Blog: CEO’s Corner

Why Did My 10,000 Page Document Fail…

by | Jun 23, 2014

…when we were driving 1,000 users on the Server?

It was a shock to learn a few years ago that Corporate IT departments often live in blissful ignorance of what their users are doing. Yes, they provision all the hardware and most of the software for the enterprise, but then, apparently, they lose touch and the users can get creative.

In one way, it’s reassuring. The hardware and software running in enterprise systems are so flexible and capable that they can do many things, sometimes more than was ever intended. Even when limits are stretched to the breaking point, IT rarely gets called in to examine the issues. Most departments will make do with what they have to avoid the inconvenience and delay of contacting their IT managers.

So perhaps it’s not surprising that corporate rollouts of new systems sometimes get slammed immediately. For example, the development team will build a new system (say, for document management workflow), run tests on the use cases as they understand them, and then deploy it to the greater organization. But: wham!

What the user group may not have told them is that since the old system was installed years ago, new document types were added, color documents became more common, and new uses were found for the system. More people were working on it and pushing its limits. You expect a newer system to more than handle the old load, but if the workload is different from what IT anticipated, special configuration and tuning may be required for optimal performance.

One important trend today is for new systems to replace a client-based architecture, where most of the processing was done on the client, with a server-based one, where the client is very light and most of the processing occurs on the server. For scenarios where a lot of processing historically occurred on the client, this kind of shift can cause poor overall performance on the replacement system.

Today, there are many reasons for centralizing the processing, including:

  • Centralized document storage and better control of security
  • Upgrading a few servers is usually much simpler than upgrading hundreds or thousands of clients
  • Compatibility with a wider range of clients, including Macs, PCs, and mobile devices
  • The ability to support clients all over the world much more easily


In this kind of scenario, a powerful server farm with more cores and more memory will offer greater productivity and efficiency than the older system (as long as you actually upgrade the server farm). Ignore that advice at your own peril.

What many people don’t recognize, even in IT departments, is that large documents (and especially large color documents) take large resources to handle efficiently. There is a big difference between a 10-page fax file and a 10,000-page color PDF file. If you don’t plan for that, you will be very sorry.
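
To put rough numbers on that difference, here is a back-of-the-envelope sketch in Python. The page size, resolutions, and bit depths are illustrative assumptions, not figures from any particular format or product, but they show the scale of the gap:

    # Back-of-the-envelope sizing: page dimensions, DPI, and bit depths
    # below are assumptions for illustration only.

    def page_bytes(width_in, height_in, dpi, bits_per_pixel):
        """Uncompressed raster size of one rendered page, in bytes."""
        pixels = (width_in * dpi) * (height_in * dpi)
        return pixels * bits_per_pixel / 8

    # A 10-page bitonal fax: US Letter at 200 DPI, 1 bit per pixel.
    fax = 10 * page_bytes(8.5, 11, 200, 1)

    # A 10,000-page color PDF rendered at 300 DPI in 24-bit RGB.
    color_pdf = 10_000 * page_bytes(8.5, 11, 300, 24)

    print(f"10-page fax:       {fax / 2**20:,.1f} MB uncompressed")
    print(f"10,000-page color: {color_pdf / 2**30:,.1f} GB uncompressed")

No sensible viewer holds every rendered page in memory at once, but the per-page cost multiplied by the number of concurrent users is what sets the memory and core budget on the server.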

So what’s the quick conclusion?

  1. When upgrading your systems, gather all the use cases you can find.
  2. Don’t skimp on server resources: memory and cores are more critical than ever!
  3. Make sure your server software is upgraded at the same time as the hardware.
  4. Before a rollout, test exhaustively against a realistic use case. If you’re going to have 1,000 users working with 1,000-page PDF documents, test that scenario; don’t take a shortcut (a rough sketch of what such a test might look like follows this list).
  5. Make sure your vendor offers high priority support for your rollout. For extra fees, most companies offer 24×7 support for production issues. Get it and tell your vendor in advance when you’re rolling out.
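
To make point 4 concrete, here is a minimal sketch of that kind of test, assuming a hypothetical HTTP endpoint that renders one page of a stored document. The URL, parameters, and document name are placeholders, not any real product’s API:

    # Minimal concurrency sketch for point 4. The endpoint, parameters, and
    # test document below are hypothetical placeholders.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    BASE_URL = "http://test-server.example.com/render"  # hypothetical endpoint
    USERS = 1000            # match the concurrent user count you expect in production
    PAGES_PER_USER = 20     # pages each simulated user steps through

    def simulate_user(user_id):
        """One simulated user paging through a large, realistic test document."""
        worst = 0.0
        for page in range(1, PAGES_PER_USER + 1):
            start = time.time()
            resp = requests.get(
                BASE_URL,
                params={"doc": "big-color-10000pp.pdf", "page": page},
                timeout=60,
            )
            resp.raise_for_status()
            worst = max(worst, time.time() - start)
        return worst

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=USERS) as pool:
            results = list(pool.map(simulate_user, range(USERS)))
        print(f"worst per-user page latency: {max(results):.2f}s")

A real rollout test would likely use a dedicated load-testing tool rather than a hand-rolled script, but the shape is the same: realistic document sizes, realistic concurrency, and pass/fail criteria agreed on before the rollout date.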