
It's The Best Thing Since Virtual Computing!

Cloud Services represents the next step in the IT frontier. Virtual computing allowed a single, beefy server to stand in for four or more servers, but it still ran as purchased software licenses on the local network. Cloud abstracts the infrastructure even further by keeping the underlying hardware and hosting solution out of the organization's domain: the services supported by that now-hidden infrastructure are the only things you interact with, and you only pay for what gets used. It's a pretty cool concept.


At the same time, Pharos Blueprint and Uniprint represent the best print management solutions available for on-network implementation today. But what if your on-network environment -- the data center, the server room, the server closet, the rack or table in the supply room -- is going away? There's always Pharos Beacon, our true cloud platform solution...if you want to move to a multi-tenant, hosted print management solution.


It seems a big chasm to cross, that "my computer/your computer" gulf. And for many reasons (sometimes internal, sometimes external), you may not be on the "all-in" bandwagon yet. A "private" hosted solution, like renting compute and networking resources from AWS, Microsoft Azure, IBM Cloud, or Google Cloud, can be a great intermediate step, allowing you to maintain your investment (time, knowledge, money) in Blueprint or Uniprint and the associated accessories (MobilePrint, iMFP solutions) while still letting you say "goodbye" to your server rack.


This article will help you understand what to expect when moving to a cloud provider and the pitfalls to avoid in this Brave New World (thank you, Aldous Huxley).


Evolution, Not Revolution

Memes and t-shirts abound with the message: "Cloud: just somebody else's computer." It's true. Popular cloud-based storage services like Dropbox are really just big arrays of disks with a nifty web front-end to manage the stuff you put there. Where you win is in the support array behind it:

  • Rapidly deploy a new server as needed, anywhere in the world
  • Only pay for the resources you use
  • Get more computing power and storage for less cash

But with the good, potentially, comes the bad. Your server resources are not in your four walls anymore, so you have to be more concerned with:

  • Network Latency. Your data and functions are now happening over your external WAN connection: the Internet. You need to factor increased utilization into your broadband equation. How much you may need to add depends on how much you use and clients' willingness to sometimes wait longer for things.
  • Data security/integrity. All cloud providers offer some type of virtualized network function to separate your traffic from others, with some type of mechanism to establish VPN (virtual private networking) connectivity. The tunneled, point-to-point data encryption keeps that data safe.
  • Availability. Your broadband provider becomes a single point of failure, and any outage or QoS problem could bring the entire operation down or, at best, deliver poor application performance.
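The latency concern above lends itself to a quick back-of-the-envelope sizing exercise. The figures below (job counts, job size, business window) are invented purely for illustration; plug in numbers from your own Print Scout reporting:

```shell
# Hypothetical figures for illustration only: 2,000 print jobs per day,
# averaging 1.5 MB each, concentrated in an 8-hour business window.
JOBS_PER_DAY=2000
AVG_JOB_BYTES=1500000
WINDOW_SECONDS=$((8 * 3600))

TOTAL_BYTES=$((JOBS_PER_DAY * AVG_JOB_BYTES))
# Convert total bytes to a sustained bit rate across the window, in kbps
AVG_KBPS=$((TOTAL_BYTES * 8 / WINDOW_SECONDS / 1000))

echo "Added WAN load: roughly ${AVG_KBPS} kbps sustained"
```

Remember that print traffic is bursty; peak demand (everyone releasing jobs at 9:00 AM) can be many times the sustained average, so provision headroom accordingly.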


"Lift and Shift" as a Viable Migration Strategy

In most cases, transition to the Cloud's "Infrastructure as a Service" (IaaS) platform is just a "lift" out of the organization's data center (or server closet) and a "shift" to a rented infrastructure full of AWS EC2 instances (Elastic Compute Cloud, or EC2, is Amazon's name for virtualized computing resources, or servers) identified as migration targets. And while it sounds like a half-step forward, in many cases it is the best first choice when going to the Cloud. Here's how it works:


The current Pharos Blueprint implementation comprises eight servers:

  • One (1) server hosting Microsoft SQL Server
  • One (1) server hosting Pharos Blueprint as an Analyst
  • Four (4) servers hosting Pharos Blueprint as a Collector for Secure Release Here (SRH) printing
  • Two (2) servers hosting Pharos Blueprint as a Collector for Print Scout management, Policy Print, and job recording

Alongside those servers, an F5 BIG-IP load balancing solution hosts one Global Traffic Manager instance and two underlying Local Traffic Manager groups for the four SRH servers, providing geographic isolation as well as fail-over.



Cloud is not simple. At least, not its design. Amazon Web Services, for example, offers 24 different categories of service offerings, spanning from the more mundane, like Storage and Compute, to "Buck Rogers" stuff like Machine Learning and Content Delivery. A standard rule of thumb with cloud migration projects is to start simple (the "KISS" philosophy) and then approach it as an iterative, human design journey, methodically adding and improving as solution stability and the knowledge base grow. Management (cost containment) and users (general availability and change management) are both usually very happy with that type of plan. Here follow the "4 Simple Steps" to migrating to Cloud Services. I will be using AWS as the Cloud Services provider and Blueprint Enterprise as the Pharos solution being migrated; the other providers offer equivalent features and functions, and with minor exceptions, Uniprint follows the same process.


The 4 Simple Steps to Cloud Migration

Step 1. Design.

Within AWS, focus on the Compute and Networking offerings. This means that, using the example above, an initial Cloud implementation will be made up of

  • Eight (8) EC2 instances running Windows Server (because that's what I have now)
  • One (1) Virtual Private Cloud (VPC)
  • Either AWS Direct Connect (1 Gbps or 10 Gbps) or a VPN to connect the VPC with my network
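As a sketch only (every ID, name, and CIDR range below is a placeholder, and the instance type should come from the sizing worksheets discussed later), the AWS CLI skeleton for this design looks something like:

```shell
# Create the VPC and a private subnet for the Blueprint servers
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24

# Launch one of the eight Windows Server instances from a chosen AMI
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type m5.xlarge --subnet-id subnet-0123456789abcdef0 --count 1
```

Everything shown here can also be done from the AWS Management Console; the CLI form is simply easier to repeat for the remaining seven instances.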


...A quick note on Cloud Database Services

Everybody offers a cloud version of Microsoft SQL Server. In AWS, it is called RDS, or Relational Database Services. RDS for SQL Server sounds very attractive based on its "sell sheet": automatic backup, automated failover to another Availability Zone, and so on...but you lose any chance of establishing a "SYSADMIN" account in RDS, so simple things like installing Blueprint's databases become arduous, if not impossible, tasks. There is a way to migrate local databases to RDS by first using S3 (AWS' storage platform) to store backup (.bak) files of the database. Don't do it; it is far better to stand up a suitably-spec'd EC2 instance, install SQL Server using a purchased license, and administer it as if it were hosted on premises.


EC2 Instances

Amazon offers a staggering number of pre-configured Amazon Machine Image (AMI) templates for Windows servers. Focus on the essential: Windows version, how many CPUs, how much RAM, how much storage? The Blueprint Installation Guide, beginning on page 19, has several worksheets to determine what those need to be. Then, it's a matter of picking which AMI comes closest to the plan. If the math lands in between two options, pick the more robust choice.


 PRO TIP: During Design, settle on a normalized Collector specification. During Build, create an EC2 instance from the best-suited AMI. Install all of the necessary prerequisites, and then save out an AMI of your own to serve as the template for the Collector server infrastructure going forward. This saves time (and error) on subsequent builds.


The Virtual Private Cloud (VPC) and Connecting to the Network

The VPC is just a network, made up of one or more segments (subnets). The VPC is defined by providing a name, an IPv4 address block (in classless inter-domain routing, or CIDR, format), whether IPv6 is desired, and whether the tenancy of the VPC will be dedicated (running on dedicated hardware) or simply use the default.
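Since the VPC definition starts with that CIDR block, it helps to remember the sizing arithmetic: a /n prefix yields 2^(32-n) addresses, and AWS reserves the first four addresses plus the last address of every subnet. A quick sketch (the /24 prefix is just an example):

```shell
# Addresses in a CIDR block: 2^(32 - prefix).
# AWS reserves the first four addresses and the last address of each subnet.
PREFIX=24
TOTAL=$((1 << (32 - PREFIX)))
USABLE=$((TOTAL - 5))
echo "/${PREFIX} => ${TOTAL} addresses, ${USABLE} usable in an AWS subnet"
```

Size the block generously; a VPC's primary CIDR range is awkward to enlarge after subnets and routing are in place.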


Once that is defined, other things come along for the ride, like

  • Network ACLs. Firewalls. 'Nuff said.
  • DHCP Options. Much like defining a DHCP scope in Windows, this defines the domain name, DNS servers (either Amazon-provided or a service provider's), time server, NetBIOS (WINS) servers, and the NetBIOS node type.
  • Routing tables. Since this is a local network, each subnet must have a router with defined rules to direct traffic.


In most cases, the scenario is extending an onsite network into the cloud, so choosing "VPC with a Private Subnet Only and Hardware VPN Access" from the VPC console wizard is usually the best choice and will provide all the options you need to get started. However, depending on the size of the organization and other factors (like aversion to network latency), exploring a Direct Connect option might be better, provided that all prerequisites are met.


AWS Direct Connect

Direct Connect is an Amazon service that provides high-speed (up to 40 Gbps), low-latency connections directly into your network. This is a high-cost option and requires that you are close to one of Amazon's Direct Connect locations.


Step 2. Plan

With the Design in place, map out what needs to be done. At a high level, a Plan would look something like this. For simplicity's sake, this plan does not include migration of the database.

Milestone 1. Connectivity. Week 1, Days 1-5.

  1. Build out the VPC and connect it to our network.
    • Dependencies:
      • InfoSec sign-off
      • [Optional] Increase WAN connection bandwidth
      • [VPN connectivity established]/[Direct Connect process completed]
  2. Design and implement the necessary Network ACLs, DHCP options, and routing tables.

Milestone 2. SQL Server and Analyst EC2 Builds. Week 2, Days 6-7.

  1. Create an EC2 instance using an AMI that meets or exceeds the hardware requirements for the SQL Server in the VPC and join it to the domain.
  2. Test connectivity from a client (file share to C$ or similar).
  3. Create an EC2 instance using an AMI that meets or exceeds the hardware requirements for the Analyst server in the VPC and join it to the domain; install all necessary Blueprint pre-requisites.
  4. Install SQL Server {version} on identified EC2 instance, apply management templates, and define initial permissions.
  5. Validate connectivity between SQL server and targeted Analyst server by defining an ODBC connection to the "master" database.

Milestone 3. Install Blueprint. Week 2, Days 8-9.

  1. Install the Blueprint Analyst software on the EC2 instance for the Analyst.
  2. Create an EC2 instance using an AMI that meets or exceeds the hardware requirements for the Collector server in the VPC and join it to the domain; install all necessary Blueprint pre-requisites.
  3. Test connectivity to the Analyst from a client (file share to C$ or similar).
  4. Test connectivity to the Collector from a client (file share to C$ or similar).
  5. Install the Blueprint Collector software on the EC2 instance.
  6. Apply any updates applicable to the general release to both the Analyst and Collector.
  7. Validate connectivity between client endpoints and the Collector over the necessary ports (see Network Ports).
  8. Create an AMI using the tested Collector as the source.
  9. Deploy the remaining Collectors using the new AMI.
  10. Create the necessary certificates, install them on the Collector servers, and bind HTTPS/TCP 443 for IIS.

Milestone 4. Client Connectivity. Week 3, Days 11-15.

Desktop and Print Server Print Scouts

  1. Run the Print Scout installer on a workstation with the /reinstall switch.
    • Change the /serverurl property to the appropriate Collector name.
    • Apply the /serverlessurl property to the appropriate Collector name. [Workstations only]
    • Apply the /printserver property [Print Servers only]
  2. Verify that the workstation is attached to the appropriate Collector.
  3. Verify that the workstation has the "Pharos Secure Queue" installed.
  4. Print a test job to the Pharos Secure Queue on the workstation.
  5. Log into the "MyPrintCenter" website on the Collector and verify the job is shown. NOTE: Polling from the workstation to Collector for stored print jobs is once every 60 seconds; refresh the Job List if needed.
  6. Create the workstation package for the /reinstall with the necessary switches.
  7. Create the print server package for the /reinstall with the necessary switches.
  8. Test the workstation package; verify connectivity to the Collector and the Pharos Secure Queue.
  9. Test the print server package; verify connectivity to the Collector.
  10. Deploy Phase I.
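Pulling the switches from the steps above together, the workstation and print server packages might look like this. The installer filename and Collector URL are illustrative assumptions only; confirm both against your actual Print Scout distribution:

```shell
# Workstation package: reinstall, pointing at the new cloud Collector
PrintScout.exe /reinstall /serverurl=https://collector01.example.com /serverlessurl=https://collector01.example.com

# Print server package: reinstall with the print server switch instead
PrintScout.exe /reinstall /serverurl=https://collector01.example.com /printserver
```

Wrap these commands in whatever software deployment tool (SCCM, Intune, GPO startup script) your organization already uses for Phase I and Phase II roll-outs.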

Terminal-hosted and Integrated MFPs

  1. Duplicate a terminal definition for a device from the on-site Production Analyst to the cloud Production Analyst. After the first one, ensure that the new terminals are using the correct Terminal Type (not Generic).
  2. Redefine the host server setting to a cloud Collector.
  3. Validate that the terminal type updates and that there is a listed Collector.
  4. Modify (if applicable) the terminal settings.
  5. Make the appropriate Secure Release Here "default" settings changes to the terminal type.
  6. Migrate Phase I MFPs. NOTE: If more than one terminal type exists, perform steps 1 through 5 on a single representative device prior to migrating those terminal types.

Milestone 5. Phase II. Week 4, Days 16-20.

  • Deploy Phase II client workstations and print servers for Print Scout parent Collector changes.
  • Deploy Phase II MFPs.

Milestone 6. Monitoring, Measurement, Management. Weeks 3-5, Days 11-25.

Ongoing verification that converted clients are connecting with the appropriate Collectors, that job data is being collected and uploaded to the Analyst, and that publications are successful. Correct gaps as necessary and repeat.

Milestone 7. Project Close. Week 6, Day 26.


Step 3. Build

Use either the Management Console or the Command Line Interface (CLI) to build everything out, following the Plan through Milestones 1 to 5.


Step 4. Monitor, Measure, Manage (Milestone 6).

Monitoring, Measuring, and Managing the client-side roll-out is as important a task as any of the previous ones. By the end of Phase I, the evaluation of capacity assumptions made during the Design phase can begin. It may be determined that the Collector capacity needs adjusting up or down based on projected data. Roll Phase II before making any adjustments, as Phase I data may be incomplete.


Dismantling the Onsite Infrastructure.

Once the migration to the cloud-hosted environment is complete, dismantle the onsite infrastructure from the outside in, shutting down the Analyst last to ensure that all remaining data in the onsite implementation has been successfully published to the SQL Server databases. Retain this data, potentially migrating it to an AWS RDS SQL Server instance (since it's static data, there's no need for a SYSADMIN account to manage things) for ongoing reporting.


When troubleshooting connectivity problems between hosts, one of the most effective tools is the packet capture. Software products like Wireshark and Microsoft's Network Monitor can quickly point you toward resolution, but security-conscious IT organizations across a multitude of industries and markets are systematically putting the "kibosh" on installing packet capture software on servers...which is typically where it needs to go.


Getting Around Without It

There are a couple of ways to get around the "no packet capture on my server" problem. The more involved approach is to create a SPAN port configuration on a Cisco switch and attach a workstation with capture software to the SPAN port. But this requires getting network people involved, complicated or untimely request approvals, and other very inefficient activities. So, on to the leaner, meaner, more efficient way: netsh.


Using Netsh for Packet Capture


What's Netsh?

Netsh is a powerful command line utility that was introduced by Microsoft with Windows 2000 (where it was part of the Resource Kit; it has since been installed with the operating system). At its most basic, netsh lets you look at network configurations on both local and remote systems, but it also lets you reconfigure, back up, and restore configurations. It also lets you capture traffic, and that's what we're going to learn about today.



If you go to Google and just type "using netsh" you will find a host of results that showcase the utility's interface and Windows Firewall management capabilities, but very few of them talk about Trace. To begin interacting with Trace, open an administrative Command Prompt (or PowerShell prompt) and type netsh followed by the ENTER key. Once in the netsh shell, type trace and then the ENTER key to go into the Trace context.


Once inside, you can start to set up your trace. In general, it is bad form (unless you just need to snag a capture and get out of there) to capture everything. Typically, you only need to see the packets from either a specific host or small group of hosts, or across a specific TCP or UDP port. There are several capture filters available, and all of them can be reviewed by typing show capturefilterhelp in the Trace context. A helpful starter command within Trace is:


netsh trace start capture=yes IPv4.Address=192.168.1.50 protocol=6 tracefile=c:\temp\tcptrace.etl


This trace has two filters:

  • IPv4.Address. Specifies an IP address to capture (192.168.1.50 is just an example; substitute the host you are interested in). Specified this way, the IP address can be either the Sender or the Receiver.
  • Protocol. Restricts which protocols are present in the capture. "6" indicates all TCP protocol traffic. Popular Protocol types are 1 (ICMP), 6 (TCP), and 17 (UDP). Note that Trace Capture does not filter by port.

It also directs the trace output to a file, tcptrace.etl, in C:\Temp. The directory structure (not the file) must exist prior to running the command, or the trace will not be captured to file.


If you need to continue the trace between sessions, add the persistent=yes parameter when starting the trace. This will persist tracing even across a reboot of the host, which is helpful when attempting to troubleshoot things like DHCP addressing problems. To close the capture, type:


netsh trace stop
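Putting those options together, a persistent session might look like the sketch below (the host address and output path are examples):

```shell
# Start a trace that survives a reboot; c:\temp must already exist
netsh trace start capture=yes persistent=yes IPv4.Address=192.168.1.50 tracefile=c:\temp\tcptrace.etl

# ...reboot and reproduce the problem, then close the capture
netsh trace stop
```

The stop command can take a minute or two while Windows correlates and finalizes the ETL file; wait for it to report the file location before collecting the trace.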


Advanced: Capturing on a Remote Host

Sadly, there is no way to use netsh's remote (-r hostname) option to capture traffic on the server from the safety and security of your Windows desktop machine; if you enable the remote context, typing "trace" will just give you an error. But with a little tenacity there is always a way, and that way is Sysinternals' (a tiny little division of Microsoft) PsExec utility, part of the PsTools suite. Once downloaded, the basic syntax from the local computer is:


psexec \\hostname netsh trace start capture=yes tracefile=c:\temp\tcptrace.etl


On the local computer, you will get a message saying that the netsh command exited with "error code 0", which means it exited successfully. When the operation you're hoping to review has completed, type:


psexec \\hostname netsh trace stop


The log file will be saved to C:\Temp on the remote system.


Analyzing the Trace File

To read the produced ETL file, you can simply use Microsoft Message Analyzer or Microsoft Network Monitor. If you use Network Monitor, you must go to Tools > Options and set the Windows parser as "active". Then just use those tools' options to filter further (for example, looking for TCP 515 traffic) and see where the problem is coming from.


If you want to use Wireshark, you have to convert the ETL file to the CAP format. You can do this in PowerShell:

# Create a PEF trace session that will write a .cap file when stopped
$s = New-PefTraceSession -Path "C:\output\path\spec\OutFile.Cap" -SaveOnStop
# Feed the existing .etl capture file into the session
$s | Add-PefMessageProvider -Provider "C:\input\path\spec\Input.etl"
# Run the session; on completion, OutFile.Cap is saved
$s | Start-PefTraceSession


and then open the CAP file in Wireshark.


And that's it! Happy capturing!



As a reference, and a launching point for more "Fun with Trace," start at Microsoft's Netsh Commands for Network Trace documentation.


There is a defect within Windows 7 that causes workstations to not reliably detect an attached print server as a Windows 2008/2012 server. This forces bizarre "point and print" behavior, resulting in the print driver's settings within the Windows Registry becoming corrupted. The corrupted or missing values within the registry, typically within the "Dependent Files" key, will cause inconsistent and unexpected output when printing to any print queue using the affected print driver.



Printing through an affected print queue may result in one or more of the following conditions:

  • Corrupt spool files that are discarded by the print server.
  • Corrupt spool files that are discarded by the printer.
  • Printed output containing missing glyphs (letters).
  • Printed output containing missing sections of text, graphics or images.
  • Printing anomalies such as multiple pages of information printing atop one another on a single page.


An affected print queue is quite easy to spot through inspection of the queue's printing preferences.


The illustration to the right shows the printing preferences of a proper, non-corrupted HP print driver. Note there are manufacturer-specific options and settings, and various tabs for quality, paper, finishing, color, etc.


The printing preferences shown in the illustration to the right depict those of a corrupted print driver. Note the manufacturer-specific settings are not present. The available settings are consistent with a generic, text-only printer.


Printing through an affected print queue, like the one shown to the right, will result in very inconsistent and unexpected output, as well as the potential for the complete loss of the print job.



Digging a bit deeper into the Windows registry, it will become obvious that values are missing from required registry keys, such as the "Dependent Files" key. The illustration below shows a null value within the "Dependent Files" key of the print driver's registry settings.


Missing Dependent Files.JPG


The registry key containing print driver specific settings is:

     HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Print\Environments\Windows x64\Drivers\Version-3
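A quick way to inspect that key without opening regedit is reg.exe; a sketch is below (the driver names listed in the output will vary by workstation):

```shell
# List the "Dependent Files" value for every installed x64 v3 print driver
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Print\Environments\Windows x64\Drivers\Version-3" /s /v "Dependent Files"
```

A driver whose "Dependent Files" value comes back empty (or missing entirely) is a candidate for the correction procedure below.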

Correcting an Affected Print Driver

Please be aware that simply correcting the print driver will not prevent it from becoming corrupted again in the future. To prevent any future occurrence, the Microsoft-provided hot fixes listed within the "Prevention" section of this document must be applied to the workstation.

  1. Remove (delete) all affected print queues from the workstation.
  2. Remove the affected print driver and print driver package from the workstation.
  3. Install the Microsoft supplied hot fixes referenced within the “Prevention” section of this document.
  4. Reboot the workstation.
  5. Re-add the desired print queues and print drivers to the workstation.


Prevention of Future Occurrence

Two Microsoft hot fixes have been released to address this defect in Windows 7.  For more information, please refer to the Microsoft knowledge base articles listed below.

UPDATE: Pharos Blueprint Enterprise 5.3 features IPPS connections for Secure Release Here print jobs natively, without the need for significant operating system or printer-level configuration. All that is required is a certificate on the printer that supports both host name and IP address (some devices, even when configured with a host name, continue to provide the Blueprint Enterprise Collector with a URL composed of the IP address of the printer).


The security of some types of print jobs has always been important:

  • Financial and accounting documents
  • Documents containing sensitive and/or private data
  • Human resources information

Pharos Systems' Secure Release Here capability in Blueprint and Uniprint (and soon Beacon!) helps further the security model by storing held jobs in an encrypted state using an AES-256 algorithm. And that's great for the jobs "at rest" on the Pharos server, but what about the data when it's going from your computer to the server, or from the server to the printer when the job is released?


Well, it's unprotected. Using a network utility like Wireshark or Microsoft Network Monitor, data packets can be captured at the server or the printer and the raw PCL or PostScript code extracted. Once extracted, any PCL or PostScript viewer utility can easily create a viewable (and printable!) PDF file.


Securing the Gap

For Pharos, product-provided Point-to-Point (or Click-to-Clunk) encryption/security is a top priority, and we are actively working on this across our platforms. In the short term, however, IPSec (Internet Protocol Security) can be used to secure the data flowing from the workstation to the print server, and from the print server to the printer. And because the solution rests on IPSec -- which operates at the Network layer -- it requires absolutely no change in the Pharos software to use it!


The print device at the end of this system must support IPSec, or jobs will not print because the printer has no way to decrypt the inbound file.


At a high level, the following events take place:

  1. The workstation is configured to transmit encrypted data when printing.
  2. The print server is configured to accept the encrypted data.
  3. The print server is configured to transmit encrypted data to the printer.
  4. The printer is configured to accept the encrypted data.


Configuring the Windows Server - Inbound Print Jobs from Clients

IPSec configuration in Microsoft Windows Vista/7/8/10 and Windows Server 2008/2012 is best performed using Windows Firewall with Advanced Security. The steps are:

  1. Launch the "Windows Firewall with Advanced Security" console.
  2. Enable it for the profile(s) -- Domain, Private, Public -- needed for your specific configuration.
  3. Create a new Connection Security Rule.
  4. Select the Custom option and click Next.
  5. Define the Endpoints. For the print server rule, Endpoint 1 is the server, and Endpoint 2 is the client (and, in this case, the client includes the printer). It is a best practice to choose "These IP addresses" for Endpoint 1, click the "Add..." button, and type in the IP address of the server. If the server's IP address can change because it is using a non-reserved DHCP address, opt for "This IP address range" instead and include the DHCP scope. Alternatively, input the subnet in CIDR format and continue to use the "This IP address or subnet" option.

    Continue to leave Endpoint 2 set to "Any IP address". Click "Next >".

    If the number of printers connected to the print server is small, or they sit in consistent subnets, you can improve security by specifying the printers' IP addresses in Endpoint 2.

  6. Choose the "Require authentication for inbound and outbound connections" option and click "Next >".
  7. Select the "Advanced" Authentication Method. Click the "Customize" button.
  8. Click the "Add..." button under the "First authentication methods" section. There will be no "Second authentication method" defined.
  9. The authentication method makes available several credential options. In a larger environment, it is best to use the "Computer certificate...(CA)" option and then choose a pre-installed enterprise Root Certificate Authority (CA). For the purposes of this tutorial, the "Preshared key" method will be used. Select the "Preshared" option and type in a passphrase. This passphrase will be used for the entire configuration going forward. When the credential has been defined, click OK and then OK again to set the Authentication Method.

    Consult Microsoft's documentation on TechNet regarding the various authentication methods and their configuration. A good starting place is Windows Firewall with Advanced Security. As mentioned earlier, "Preshared Key" is great for demonstration and teaching purposes, but it is not considered a robust way of securing a production environment. Ensure that the certificates used for IPSec come from verified and trustworthy sources.

  10. At the Protocol and Ports step, choose the TCP "Protocol type". Because this is the server inbound side, specify 139 and 445 for "Endpoint 1 ports" and leave "Endpoint 2 ports" at "All Ports". NOTES: Because this step does not allow multiple Protocol Type selection, another Connection Security Rule will need to be created to handle the UDP ports used by Windows Printing: 137 and 138. Additionally, if non-Windows clients (MacOS or Linux, for example) need to be accommodated, add network port 515 to the TCP port list. Click "Next >".
  11. Next, assign the new Security Rule to the profiles necessary for the network. If all three profiles are enabled for Windows Firewall, choose all three here. Click "Next >".
  12. Name the new Rule. Click "Finish" when done.
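The same inbound connection security rules can be scripted for repeatable deployment. This is a hedged sketch using netsh advfirewall: the rule names, server address, and preshared key are examples only, and (as noted below) a preshared key is for demonstration, not production:

```shell
# Connection security rule requiring IPSec auth on the Windows printing TCP ports
netsh advfirewall consec add rule name="Print-IPSec-TCP" endpoint1=192.168.1.10 endpoint2=any action=requireinrequireout auth1=computerpsk auth1psk="ExampleKey" protocol=tcp port1=139,445 port2=any

# Companion rule for the UDP ports used by Windows printing
netsh advfirewall consec add rule name="Print-IPSec-UDP" endpoint1=192.168.1.10 endpoint2=any action=requireinrequireout auth1=computerpsk auth1psk="ExampleKey" protocol=udp port1=137,138 port2=any
```

In a production environment, swap auth1=computerpsk for certificate-based authentication against your enterprise CA.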


At this point, re-engage the New Rule wizard to create the rule for the UDP ports (137, 138). Follow steps 1 through 12, but at step 10 choose the UDP "Protocol type" and specify 137 and 138 for "Endpoint 1 ports".

This satisfies the printing client, but not the outbound connection (TCP 515 and/or 9100) to the printer, which is covered in....

Configuring the Windows Server - Outbound to the Printer

As will be shown later, the printer will be configured to require encrypted/authorized communications as well. This means that the "send" function needs to be encrypted/authorized at the server. To begin:

  1. Create a new Connection Security Rule.
  2. Choose the Custom rule type. Click "Next >".
  3. At the Endpoint step, Endpoint 1 still remains the server, and the printer(s) that will be used as release targets are Endpoint 2. Like in the earlier rules, it is a best practice to specify the IP address of the server as Endpoint 1. Click "Next >" when finished.
  4. Authentication remains required for both inbound and outbound connections. In general, however, this is going to be a one-way street from the server to the printer. Click "Next >" to go to the next step.
  5. At the Authentication step, choose the "Advanced" option and then click the "Customize..." button.
  6. Click the "Add..." button under the "First authentication methods" section. There will be no "Second authentication method" defined.
  7. The authentication method makes available several credential options. In a larger environment, it is best to use the "Computer certificate...(CA)" option and then choose a pre-installed enterprise Root Certificate Authority (CA). For the purposes of this tutorial, the "Preshared key" method will be used. Select the "Preshared" option and type in a passphrase. This passphrase will be used for the entire configuration going forward. When the credential has been defined, click OK and then OK again to set the Authentication Method.

  8. At the Protocol and Ports step, choose the TCP "Protocol type". Keep "Endpoint 1" at "All Ports" and set "Endpoint 2" to "Specific Ports", using 515 and/or 9100 as the port numbers. If it is possible that some printers will be configured with the "RAW" protocol and others as "LPR" endpoints, then specify both. Otherwise, just specify the one or the other. Click "Next >" to continue.
  9. At the Profiles step, choose the profiles needed for the specific network's configuration. Click "Next >" to continue.
  10. Name the rule. Save the rule by clicking "Finish".


Managing the Inbound and Outbound Server Connections with Windows Firewall

Because Windows Firewall is running, the ports used by the Pharos services and those for IPSec communications need to be white-listed for Windows Firewall, or nothing will happen. In a Blueprint Enterprise implementation on a Collector, the following ports, at minimum, will be needed:


Protocol Type | Port Number | Allow In | Allow Out


* - This communication will need to be allowed when encrypted.

** - TCP 515 is only required "In" when a non-Windows client is accessing the shared printer via LPR/LPD.

*** - This is the default Blueprint port for communications. It may be different depending on the site configuration. Check with the Blueprint Server Configuration utility.


Example: Creating an Encrypted Connection Inbound Rule

  1. Create a new Inbound Rule.
  2. Choose the "Custom" option. Click "Next >".
  3. Choose "All Programs" and click "Next >".
  4. Choose the TCP "Protocol Type". Then choose Specific Ports for the "Local port" option and enter the desired port(s). Separate non-contiguous ports with a comma. Leave "Remote port" set to All Ports. Click "Next >".

  5. Keep the "Scope" step at its defaults. Click "Next >".
  6. At "Action" choose "Allow the connection if it is secure" and click the "Customize" button.
  7. For complete security, choose the "Require the connections to be encrypted" option. Click "OK" and then "Next >" to move to the next step.

    ONLY perform steps 6 and 7 if you desire complete encryption across the client platform. Configuring the white-list rule in this way will create an operational challenge for clients that are not configured for IPSec communications.

  8. Leave the "Users" step at the default. Click "Next >".
  9. Leave the "Computers" step at the default. Click "Next >".
  10. Define the rule for the necessary profiles. Click "Next >".
  11. Name the rule.


Once the ports have been white-listed, it is time to move on to....


Configuring the Windows Client for IPSec

Configuring the Windows client follows the same procedure outlined above for the server. The only differences are:

  • There is no need to configure any printer ports (TCP 515 or 9100). Simply configure the two TCP ports (139,445) and UDP ports (137,138).
  • Endpoint 2 is the Windows server. This will cause most connection options to "flip flop" during configuration: Remote is now Local and vice versa.

Now that the Windows environment is secured, move on to the printer fleet.


Configuring the Printer for IPSec

In this example, an HP LaserJet 700 mfp M775 is being configured. Other manufacturers, and other models, will have different interfaces and implementation steps, but the configuration options will be similar.


The incorrect configuration of IPSec/Firewall rules on a printer may render the printer inoperable. Restoring a printer that can no longer be managed or used may require a factory reset by a manufacturer-trained technician in order to comply with any in-effect warranty or service agreement.


  1. Log onto the printer's management web page as an administrator.
  2. On the "Networking" tab, locate the IPSec/Firewall link. In this case, it is under the "Security" heading.
  3. Click the "Add Rules..." button.
  4. At the "Specify Address Template" step, click "New...".
  5. Select the printer's IP address under "Local Address" and enter the server's IP address as the "Remote Address". Click "OK" to accept the changes.
  6. Select the new address template and click "Next>".

  7. Click the "New..." button on the "Specify Service Template" step.
  8. Click the "Manage Services" button.
  9. Scroll to pick the LPD and, if desired, the P9100 services. Click "OK" to return.
  10. Provide the template a name. Click "OK".
  11. Select the new template and click "Next>".
  12. Select the "Require traffic to be protected with an IPSec/Firewall policy" option and click "Next>".
  13. Click the "New" button to create a new IPSec template.
  14. Name the IPSec template and choose "IKEv1" with "High interoperability/Low security" and click the "Next>" button.
  15. Enter the passphrase used when creating the Microsoft Windows IPSec rules into the "Pre-Shared Key" field. Click "Next>" when finished.

    Again, using a pre-shared key to manage encryption and security is a risk. It is better to configure IPSec on the printers using a CA certificate. Consult the printer's administration guide or online resource to install and configure IPSec with certificates.

  16. Select the new IPSec template and click "Next>".
  17. A summary screen is displayed. Click the "Finish" button.
  18. Clicking the "Finish" button opens a dialog box asking whether the new rule should be activated and whether the "failsafe" option should be enabled. Choose "Yes" for both (and disable failsafe once everything has been tested; it represents another security risk), and then click "OK".
  19. This returns to the primary IPSec/Firewall page. IPSec is now enabled, as is the rule. Click the "Apply" button to set everything.

    A confirmation dialog box will display. Click "OK" to accept.


Applying the changes may sometimes result in a "405 Method Not Allowed" message in the browser rather than the administrative web page. Simply retype the printer's IP address in the address bar of the browser and log back in.



In a small network, manual configuration -- while bothersome -- is not a chore. Larger-scale deployments are best served by using management tools to perform these tasks. Active Directory Group Policy Objects (GPOs) make deploying IPSec across the entire environment, or a select subset of it, very easy. Similarly, most device manufacturers provide a centralized administration application that will readily deploy IPSec to the fleet. These "automation" tools are out of scope for this tutorial, but learning to use them is a skill set highly valued in today's security-conscious environment.


As was said earlier, look for the Pharos end-to-end solution on the horizon.


DISCLAIMER: This article is provided as-is for informational purposes only and Pharos Systems International, Inc., including its agents, partners, and subsidiaries, is held harmless in any event that its use causes any disruption in an environment due to its implementation. Pharos Systems International, Inc. does not recommend, nor endorse, a specific third-party product by its inclusion in this article; nor does it, or its agents, guarantee the information contained in this article to be free of flaw or defect. For specific questions as to the fitness of the process described herein, please consult your printer or operating system manufacturer's technical support resources. The processes here may not be satisfactory in every environment, as each network infrastructure has its own unique configurations, challenges, and availability.


  • Print jobs are hard to read.
  • The text on the paper is a very light gray/grey.
  • Screen prints from terminal emulation program/application are useless.



  • QWS3270
  • Terminal Emulation software
  • "Universal" print driver
  • Pharos Uniprint
  • Pharos Blueprint
  • Secured or Direct queues



The QWS3270 application, like other terminal emulation software, ignores the background color when printing a screen from the mainframe application, sending only the text information. By default, the application prints in color, so in the process of printing the job, QWS3270 queries the print driver to determine whether the printer supports color.

While GREEN TEXT ON A BLACK BACKGROUND is nice on the screen, that green color doesn't translate well to grayscale if the job is delivered to a monochrome laser printer. (Remember, back in the 60s, 70s, and 80s, most of this output went to green-bar paper on tractor-feed impact printers, so everything came out black; not so on a printer capable of 256 shades of gray.) Let's consider our green, which in the monitor's color space is defined as R=0, G=255, B=0:

First, most printers' internal software simply converts color to gray using a rough average, where the three components are added together and then divided by three: (R + G + B) / 3. (There's also a "luminosity" method that gives more weight to green, because that's the color channel to which we're most sensitive; that formula is 0.21 R + 0.72 G + 0.07 B.) The individual colors are first converted to percentages, 0/100/0, and then averaged: (0 + 100 + 0) / 3 = 33.3% (repeating 3). Text colored at 33.3% gray is difficult to read, even on screen. Imagine, quite sadly, what WHITE TEXT ON A BLACK BACKGROUND would look like in grayscale: (100 + 100 + 100) / 3 = 100%, which is white on white paper -- nothing at all.
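The two conversions described above can be worked through for our pure green (R=0, G=255, B=0); the function names here are just for illustration:

```python
# Worked example of the two RGB-to-gray conversions described above,
# applied to pure green (R=0, G=255, B=0). Results are percentages of
# full white (100% = white, 0% = black).

def average_gray(r, g, b):
    """Simple average: (R + G + B) / 3, expressed as a percentage."""
    return (r + g + b) / 3 / 255 * 100

def luminosity_gray(r, g, b):
    """Weighted ("luminosity") method: green dominates because the eye
    is most sensitive to that channel."""
    return (0.21 * r + 0.72 * g + 0.07 * b) / 255 * 100

print(round(average_gray(0, 255, 0), 1))     # 33.3 -- the light gray that's hard to read
print(round(luminosity_gray(0, 255, 0), 1))  # 72.0 -- noticeably darker
```

Note how the luminosity method would actually render the green text much darker; the washed-out output described in the symptoms comes from printers that use the rough average.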



Within the QWS3270 application, start the session with the mainframe host and then go to Options > Session > Printing. Within the Printing dialog box, choose the "Black and White" option and click the "OK" button to set the selection. Now when QWS3270 prints, it ignores the driver function entirely and just sends text specified as "black."


NOTE: This is not a default setting for this and future sessions; it applies only to the current session. If you desire a more permanent solution, a queue specific to the QWS3270 application must be provided to the workstation, utilizing a printer driver that does not include color printing support or, alternately, one that has been configured so as to appear not to support color printing.

There's been some activity lately around PostScript drivers in Windows being initially set for PostScript Level 2 language support (normally to assist with printing from some specific application), but arriving at the desktop PC with Language Level 3 support in full sway. There's a really quick remedy for that. But you need to follow some rules.

  1. Install the printer driver on the server, but don't create a queue with it yet.
  2. You need to be really handy with Notepad.
  3. You need to launch Notepad as an administrator because of the location of the file you'll be editing.
  4. My example uses the Print Management MMC snap-in, so you might want to find yourself on a Windows 2008 R2 or newer server.

Got it? Good! Let's add a propeller to our cap, shall we?


The Process

For this walk-through, I'll be using the HP Universal Printer Driver v6.2 PS3 driver. Your preparation mileage may vary, but the editing part won't.

  1. Download the driver from the manufacturer's website.
  2. Once downloaded, right-click it and go into Properties.
  3. Unblock it.
  4. The HP UPD downloads as an EXE, but it's really a WinZip self-extracting archive. I prefer 7-Zip, so I right-click the downloaded "upd-ps-x64-" and choose 7-Zip > Extract to "upd-ps-x64-\".
  5. Using the Windows Print Management MMC snap-in, add the driver to the computer.
  6. Launch Notepad as Administrator.
  7. Choose File > Open and navigate to C:\Windows\System32\Spool\Drivers\x64\3. Change the file type filter to see "all files".
  8. Find the file named "hpcu186s.ppd". To be more efficient, I set the Open dialog box to display in "Details" mode and sort by Type. This lets me get to the type "PPD file" quickly.
  9. Near the top of the file there will be a PostScript command (to be more specific, it is a Property, but we'll call it a command) called "*LanguageLevel" with a value of 3. I've highlighted it below:
  10. Change the 3 to a 2, keeping the quotes.
  11. Save the file.

Woo hoo!! We did it! Who said that printer languages were obscure and hard to meddle in? "Not this guy!" (or gal, if the better gender is reading this), says I.


That's half the battle won. At this point, I like to check for, and destroy, the file named just like my newly-edited PPD, but with a ".BPD" extension. This is a "Binary Printer Description" and is what you get when Windows has decided it can officially read your PPD file. It essentially takes all of that beautiful ASCII in the PPD and builds a "rapidly readable" binary version of it. From that point forward, Windows will only use the BPD when installing new printers...and it may not have our edit in it (so what good is that??!!?? jury's still out). So we search for it and delete it from our hard drive. But do not fear: Windows dutifully makes a new one almost on-demand when Spooler needs it, and this one will have our edit.


With that done, just install your print queue, secure it, and deploy. When you go into Properties for the queue, you'll see that the queue's language level is 2. In the case of the HP UPD v6.2, that path is found in Properties > Advanced > Printing Defaults > Advanced > Document Options > PostScript Options > PostScript Language Level, highlighted below:


And the cool part is that I can click on that 2, and in the counter wheel control that pops up, I can't get higher than 2. And that is exactly what we wanted.



We've all experienced it.

It starts innocently enough.

You put something in a safe place.

And then forget where the safe place was.


Thankfully, HP didn't forget! If you want to download an historical version of the HP UPD, just go here: and download it. As of this writing (2016 Sept 9) you can get versions 5.5 through 6.0, in both x86 and x64 flavors, for PCL5, PCL6, and PS3.


Now, to go find my wallet....

Richard Post

Need or Want Training?

Posted by Richard Post Aug 5, 2016

Massive Open Online Courses (MOOCs)





All three have tremendous amounts of resources to learn just about anything you want, from computers to psychology to astronomy. Think of it as college without the cost or the official degree, though on some courses you can pay for a certificate of completion.


Free Books


Free eBooks from Microsoft




PASS | Empowering Microsoft SQL Server & BI Professionals




Disclaimer:

Neither Pharos nor its employees are affiliated in any way with any of this material or the organizations providing it.

This is just a find-and-share activity that may benefit those who desire to pursue any of these opportunities.

One of the more forgotten aspects of managing Pharos Blueprint Enterprise and Uniprint is, ironically, one of the most important: the database. Without it, both solutions are broken and battered. This blog entry focuses on database space. All of those 1s and 0s that make up our Very Important Information have to be stored in files on a hard disk, and if we don't pay attention, pretty soon -- and when we least want it to happen -- there isn't any place to put those 1s and 0s any more. Either the database file, or the disk it sits on, gets full. While there are a host of really good monitoring systems out there, many of us don't have one because they can be expensive or difficult to configure... or if we do have something, it's very tactical: restarting stopped services, for example.


Capacity Planning: Figuring Out How Long Until You Need More Space

The vast majority of databases out there are growing, not shrinking. Some databases are hosted by generous organizations that provide all of the storage they could ever need. The rest have to be prepared to run out of space.

Why care? Well, if you ignore it, you deal with an emergency out-of-space situation rather than managing a disk space change order after a six-month -- or better, one-year -- space usage warning. The former could put you on the unemployment line; the latter could put you behind the wheel of that new Tesla (or, at the very least, encourage sustained employment).


In order to know when a given database may run out of space, you need to know a few facts. For each file group of the database, what are these values?

  • The present size of its files.
  • The amount of free space within its files.
  • The growth increment of the file, and the maximum file size (if any).
  • The free space on disk to grow those files.
  • How rapidly each file group grows, best expressed in megabytes per day.


Capturing Usage Metrics

Major monitoring tool vendors capture the necessary data listed above, which makes things easy: the data are already collected, and a table can be created to record daily size information. For those who don't have a monitoring tool, it is still possible to capture the data in a SQL job. First, you need a database and table where you can collect the information. Here is a sample CREATE script:


CREATE TABLE [dbo].[DBGrowthHistory](
      [ID] [bigint] IDENTITY(1,1) NOT NULL,
      [TimeCollected] [datetime] NOT NULL,
      [DBServerName] [nvarchar](128) NOT NULL,
      [DBName] [nvarchar](128) NOT NULL,
      [FileGroup] [sysname] NOT NULL,
      [FileLogicalName] [sysname] NOT NULL,
      [OSFileName] [nvarchar](260) NOT NULL,
      [TotalMB] [decimal](15, 2) NOT NULL,
      [FreeMB] [decimal](15, 2) NOT NULL,
      [GrowthIncrementMB] [decimal](15, 2) NOT NULL,
      [GrowthIncrementsRemaining] [decimal](15, 2) NOT NULL,
      [MaxFileSizeMB] [decimal](15, 2) NOT NULL,
      [VolumeFreeSpaceMB] [int] NOT NULL,
      CONSTRAINT [PK_DBGrowthHistory] PRIMARY KEY CLUSTERED ([ID] ASC)
);


Once the table is built, the following query can populate the table with useful data:


SELECT CONVERT(datetime,CONVERT(date,GETDATE())) as [TimeCollected]
    ,@@Servername as [DBServerName]
    ,db_name() as [DBName]
    ,b.groupname AS [FileGroup]
    ,a.Name as [FileLogicalName]
    ,[Filename] as [OSFileName]
    ,CONVERT (Decimal(15,2),(a.Size/128.000)) as [TotalMB]
    ,CONVERT (Decimal(15,2),((a.Size-FILEPROPERTY(a.Name,'SpaceUsed'))/128.000)) AS [FreeMB]
    ,CASE
       WHEN a.growth < 128 THEN
       CONVERT (Decimal(15,2),((a.size/128)*(a.growth*.01)))
       ELSE
       CONVERT (Decimal(15,2),(a.growth/128.00))
       END [GrowthIncrementMB]
    ,CASE
       WHEN a.growth < 128 AND a.maxsize > 0 THEN
       CONVERT (Decimal(15,2),((a.maxsize-FILEPROPERTY(a.Name,'SpaceUsed'))/128.000)/((a.size/128)*(a.growth*.01)))
       WHEN a.growth < 128 AND a.maxsize < 0 THEN
       CONVERT (Decimal(15,2),(((dovs.available_bytes/1048576.00)-FILEPROPERTY(a.Name,'SpaceUsed'))/128.000)/((a.size/128)*(a.growth*.01)))
       WHEN a.growth > 101 AND a.maxsize > 0 THEN
       CONVERT (Decimal(15,2),((a.maxsize-FILEPROPERTY(a.Name,'SpaceUsed'))/128.000)/(a.growth/128))
       WHEN a.growth > 101 AND a.maxsize < 0 THEN
       CONVERT (Decimal(15,2),(((dovs.available_bytes/1048576.00)-FILEPROPERTY(a.Name,'SpaceUsed'))/128.000)/(a.growth/128))
       END [GrowthIncrementsRemaining]
    ,CASE
      WHEN a.maxsize < 0 THEN
      CONVERT (Decimal(15,2),(dovs.available_bytes/1048576.00))
      WHEN a.maxsize > 0 THEN
      CONVERT (Decimal(15,2),(a.maxsize/128.00))
      END [MaxFileSizeMB]
    ,CONVERT(INT,dovs.available_bytes/1048576.0) AS [VolumeFreeSpaceMB]
FROM dbo.sysfiles a (NOLOCK)
JOIN sysfilegroups b (NOLOCK)
    ON a.groupid = b.groupid
JOIN msdb.sys.master_files c
    ON a.filename = c.physical_name
CROSS APPLY sys.dm_os_volume_stats(c.database_id,c.file_id) dovs
ORDER BY b.groupname;


The above query only runs in the database for which you desire to collect statistics, which isn't very helpful. You can easily prepend this query with an "INSERT" and wrap it in "sp_MSforeachdb" in order to sweep all database stats for a SQL Server instance into the table:


EXEC sp_MSforeachdb 'Use [?];
INSERT INTO psbprint.dbo.DBGrowthHistory
SELECT CONVERT(datetime,CONVERT(date,GETDATE())) as [TimeCollected]
    ,@@Servername as [DBServerName]
    ,db_name() as [DBName]
    ,b.groupname AS [FileGroup]
    ,a.Name as [FileLogicalName]
    ,[Filename] as [OSFileName]
    ,CONVERT (Decimal(15,2),(a.Size/128.000)) as [TotalMB]
    ,CONVERT (Decimal(15,2),((a.Size-FILEPROPERTY(a.Name,''SpaceUsed''))/128.000)) AS [FreeMB]
    ,CASE
       WHEN a.growth < 128 THEN
       CONVERT (Decimal(15,2),((a.size/128)*(a.growth*.01)))
       ELSE
       CONVERT (Decimal(15,2),(a.growth/128.00))
       END [GrowthIncrementMB]
    ,CASE
       WHEN a.growth < 128 AND a.maxsize > 0 THEN
       CONVERT (Decimal(15,2),((a.maxsize-FILEPROPERTY(a.Name,''SpaceUsed''))/128.000)/((a.size/128)*(a.growth*.01)))
       WHEN a.growth < 128 AND a.maxsize < 0 THEN
       CONVERT (Decimal(15,2),(((dovs.available_bytes/1048576.00)-FILEPROPERTY(a.Name,''SpaceUsed''))/128.000)/((a.size/128)*(a.growth*.01)))
       WHEN a.growth > 101 AND a.maxsize > 0 THEN
       CONVERT (Decimal(15,2),((a.maxsize-FILEPROPERTY(a.Name,''SpaceUsed''))/128.000)/(a.growth/128))
       WHEN a.growth > 101 AND a.maxsize < 0 THEN
       CONVERT (Decimal(15,2),(((dovs.available_bytes/1048576.00)-FILEPROPERTY(a.Name,''SpaceUsed''))/128.000)/(a.growth/128))
       END [GrowthIncrementsRemaining]
    ,CASE
      WHEN a.maxsize < 0 THEN
      CONVERT (Decimal(15,2),(dovs.available_bytes/1048576.00))
      WHEN a.maxsize > 0 THEN
      CONVERT (Decimal(15,2),(a.maxsize/128.00))
      END [MaxFileSizeMB]
    ,CONVERT(INT,dovs.available_bytes/1048576.0) AS [VolumeFreeSpaceMB]
FROM dbo.sysfiles a (NOLOCK)
JOIN sysfilegroups b (NOLOCK)
    ON a.groupid = b.groupid
JOIN msdb.sys.master_files c
    ON a.filename = c.physical_name
CROSS APPLY sys.dm_os_volume_stats(c.database_id,c.file_id) dovs
ORDER BY b.groupname;'


Some notes about this query:

  • It handles both “In Percent” and “In Megabytes” growth options.
  • It handles both “Restricted” and “Unrestricted” file growth; however, unrestricted will just show the remaining disk/volume free space as [MaxFileSizeMB], so it should match [VolumeFreeSpaceMB].
  • If the file growth is unrestricted (a bad practice), the [GrowthIncrementsRemaining] value will be very high if there is considerable free volume space available. It will also be nearly impossible to predict how soon it will be before space on the volume runs out.
  • The sys.dm_os_volume_stats function is used instead of xp_fixeddrives because many SQL Server configurations utilize volume mount points to access SAN storage, rather than direct-mapping LUNs (Logical Unit Numbers; how the "size" of a disk is presented to the operating system) as disks. xp_fixeddrives will not show free space inside volume mount points. This function is only present in SQL Server 2008 R2 Service Pack 1 and newer versions.


Creating the Report

Armed with the data, it is now time to calculate! The equation is pretty straightforward:


Days Remaining = (Growth Increment * Number of Growths remaining) / (Growth per Day [averaged])


The lowest number of days remaining from all file groups is how long you have until you either:

  1. Increase your Maximum database size, or
  2. Get more storage, or
  3. Do both 1 & 2
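The equation above can be expressed as a small function; the names here are illustrative, with inputs mirroring the DBGrowthHistory columns, and the growth-per-day figure must come from your own samples (e.g., the day-over-day change in TotalMB minus FreeMB):

```python
# Sketch of the "days remaining" equation from this article:
# Days Remaining = (Growth Increment * Growths Remaining) / (Growth per Day)

def days_remaining(growth_increment_mb, growths_remaining, growth_per_day_mb):
    """Days until a file group exhausts its allowed growth room."""
    if growth_per_day_mb <= 0:
        return float("inf")  # not growing; no deadline to worry about
    return (growth_increment_mb * growths_remaining) / growth_per_day_mb

# Example: 256 MB increments, 40 growths left, ~85 MB of new data per day
print(round(days_remaining(256, 40, 85)))  # ~120 days
```

A result under your change-order lead time (365, 180, 90 days, whatever fits your site) is the signal to start the storage conversation now rather than later.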


The only value that the DBGrowthHistory table does not carry internally is the Average Growth Per Day. If the additional data storage rate is consistent, you can look at the average of the [GrowthIncrementsRemaining] field over just a few days and get a suitable average. Otherwise, you may need to wait a month or so before the Average function returns anything useful. A query based on the current data and the equation is:


SELECT DBName AS [Database]
       ,CONVERT (INT,(((GrowthIncrementMB*MIN(GrowthIncrementsRemaining)))/AVG(GrowthIncrementsRemaining))) AS [Days Remaining]
       ,MIN(VolumeFreeSpaceMB) AS [Free Disk (MB)]
FROM DBGrowthHistory
GROUP BY DBName,GrowthIncrementMB
ORDER BY [Days Remaining]


This serves as the basis for the reporting and alerting mechanism that ensures the database keeps doing its job. The results of the query can be pushed into some other tool, or wrapped in other logic such that if [Days Remaining] is less than 365 (or 180, 90, etc.) the alarms start going off.


Things That Will Destroy Growing Room

Several things can get in the way of the Perfect Plan:


Multiple files on a drive: The chances that you have a 500GB volume with just one 30,000MB (29.29GB) file are, in a normal production environment, remote.  It is more likely that a single volume will have multiple database files.  In that event, you have to look at two options:

  • Using a calculation to split the space between the files; or
  • Setting a maximum file size for each database file.

Setting a maximum file size is a best practice.  Assigning maximum file size means growth can be easily monitored and an accurate prediction of “days-till-full” can be made. This means an impending-doom scenario can be confidently avoided - if the reported metric is used in day-to-day management.


Mixed files on a drive: This is the real problem scenario.  If the drive is used for non-SQL items (e.g., backups, or [ugh] file shares), the availability of space is unpredictable.  What solution then?  Wild guess? I think not; we are professionals here.  Assuming that continued database operation trumps somebody’s files, the best recommendation is to set the “Initial” file sizes of the database’s files to something that already fills the disk, and then set the maximum file size so that it cannot grow (in other words: the maximum file size is equal to current file size).  That way, other disk space hogs cannot grab the disk from you, and your database growth predictions will be all the more accurate.


Database Configuration Best Practices

Avoid the following database configuration options for continued success in database availability:


Growth by Percentage: No two growths in a file are likely to be the same. Calculating how many growths are left would leave Stephen Hawking in tears. Moreover, in a growing scenario, each growth is larger than the previous, raising the chance of a growth that is too large for the remaining disk. Always use a fixed number of megabytes as a growth factor. Most vendors that supply database build scripts will specify percentage growth simply because the product doesn’t know how much (or how little) the site will end up using the application. Monitor database growth over several days and then determine how much, in megabytes, to set database growth.
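A quick worked example makes the compounding problem concrete; the numbers and function name below are purely illustrative:

```python
# Illustration of why percent growth is hard to plan for: each growth is
# larger than the last, while fixed-MB growth stays predictable.

def growth_sizes(initial_mb, n, percent=None, fixed_mb=None):
    """Return the size (MB, rounded) of each of the next n growth increments."""
    size, increments = initial_mb, []
    for _ in range(n):
        inc = size * percent / 100 if percent is not None else fixed_mb
        increments.append(round(inc))
        size += inc
    return increments

print(growth_sizes(10000, 4, percent=10))    # [1000, 1100, 1210, 1331]
print(growth_sizes(10000, 4, fixed_mb=256))  # [256, 256, 256, 256]
```

With 10% growth on a 10,000 MB file, the fourth growth is already a third larger than the first; with a fixed 256 MB increment, every growth is the same size, and "growths remaining" is simple division.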


Unrestricted Maximum File Size: A 1TB volume on a SQL Server seems to be a lot of space. But if there are 10 databases’ files that are housed there, with each one having unlimited growth potential, it can only take one messed up stored procedure to bring down 10 databases. Again, like Percentage Growth, most vendors that supply database build scripts will also specify unrestricted growth for the database file for the same reasons Percent Growth was used. Monitor database growth and determine what the maximum size for the database file should be, based on the site’s (and application’s) data retention requirements.


Mixed files on a drive: This is being repeated intentionally. If there is any way to avoid using the same drive for database data files and any other type of file (including transaction log files, database backup files, and SSIS package files), do it! With both DAS (Direct-Attached Storage) and SAN (Storage Area Network) storage systems, there are performance benefits to separating file types across drives. It does no good to assign a growth maximum that can never be reached because other files are in the way. Note that doing this may make it difficult to refresh a lower (test or acceptance) environment with production data, because the restored file sizes would be too large.


Not Checking Frequently: Not only should you calculate the days-left-to-full of every major production database, you should look at it regularly. By that I mean every single workday. If some extraordinary event occurs that sucks up a whole pile of space, you need to know. What if someone puts a file onto your SQL drive without your knowledge? ("Hey! I found 570GB to hold the movies I downloaded from BitTorrent in violation of Company AUP! The DBA won't know!") "No worries, there is plenty of space!" can quickly turn into "Could not allocate space for object 'whatever' in database 'PorkChopSandwiches' because the 'PRIMARY' filegroup is full..." Properly calculating days left would show you any kind of impact like that and give you an opportunity to act before the error arises.

I received this great info from one of the trainers from the SQL PASS organization. The attached file, PerfMon Setup, gives step-by-step instructions on setting up Windows Performance Monitoring to monitor a server. There are a few bits that are specific to SQL servers, but most of it is general and excellent monitoring info. Below is some more information to take the monitoring a step further and import the data into SQL for analysis...


Richard Post

SQL PASS Organization

Posted by Richard Post Jun 22, 2015

SQL PASS is a user group organization of SQL and database administrators dedicated to training others and sharing information.


They do a SQL Saturday, which is a free event held all around the country and the world. I have gone several times and enjoyed it, even though I'm not a heavy SQL admin.


There is another event called 24 Hours of PASS, a free set of online/remote hour-long seminars on June 24th.


They also have local user groups and an online user group that does free online training/seminars too. There is one coming up on July 7th on SQL Server Reporting Services (SSRS 101).


If you are interested in SQL, databases, or social networking events, this is one I'd recommend.