Scott Olswold

Head in the Sky, Feet on the Ground: Moving Your Pharos Solution to the Cloud

Blog Post created by Scott Olswold on Oct 18, 2019

It's The Best Thing Since Virtual Computing!

Cloud services represent the next step in the IT frontier. Virtual computing allowed a single, beefy server to do the work of four or more, but it still ran as purchased software licenses on the local network. Cloud abstracts the infrastructure even further by keeping the underlying hardware and hosting platform out of the organization's domain: the services that infrastructure supports are the only things you interact with, and you pay only for what you use. It's a pretty cool concept.


At the same time, Pharos Blueprint and Uniprint represent the best Print Management solutions available for on-network implementation today. But what if your on-network environment -- the data center, the server room, the server closet, the rack or table in the supply room -- is going away? Pharos Beacon is your best first choice in that case. Beacon is our "Managed Print Cloud Solution in a Box" product that takes all of the infrastructure hassle away faster than you can push the secure agent to your printer (and that's pretty darn fast!). And if you implement QR Release, you have a nearly (99.99%, anyway) touch-free secure print solution.


It seems a big chasm to cross, that "my computer/your computer" gulf. And for many reasons (sometimes internal, sometimes external), you may not be on the "all-in" bandwagon yet, so Pharos Beacon may not be for you. A "private" hosted solution, like renting compute and networking resources from AWS, Microsoft Azure, IBM Cloud, or Google Cloud, can be a great intermediate step, allowing you to maintain your investment (time, knowledge, money) in Blueprint or Uniprint and the associated accessories (MobilePrint, iMFP solutions) while still allowing you to say "goodbye" to your server rack.


This article will help you understand what to expect when moving to a cloud provider and the pitfalls to avoid in this Brave New World (thank you, Aldous Huxley).


Evolution, Not Revolution

Memes and t-shirts abound with the message: "Cloud: Just somebody else's computer." It's true. Popular cloud-based storage services like Dropbox are really just big arrays of disks with a nifty web front-end to manage the stuff you put there. Where you win is in the support array behind it:

  • Rapidly deploy a new server as needed, anywhere in the world
  • Only pay for the resources you use
  • Get more computing power and storage for less cash

But with the good, potentially, comes the bad. Your server resources are not in your four walls anymore, so you have to be more concerned with:

  • Network Latency. Your data and functions are now happening over your external WAN connection: the Internet. You need to factor increased utilization into your broadband equation. How much you may need to add depends on how much you use and clients' willingness to sometimes wait longer for things.
  • Data security/integrity. All cloud providers offer some type of virtualized network function to separate your traffic from others, with some type of mechanism to establish VPN (virtual private networking) connectivity. The tunneled, point-to-point data encryption keeps that data safe.
  • Availability. Your broadband provider becomes a single point of failure, and any outage or QoS problem could bring the entire operation down or, at best, deliver poor application performance.
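The network latency and bandwidth concern above is worth putting into rough numbers before committing to a design. Here is a minimal back-of-the-envelope sketch; the job counts and sizes are illustrative assumptions, not Pharos or AWS guidance, so substitute figures from your own Print Scout reporting.

```python
# Back-of-the-envelope sizing for the extra WAN traffic print jobs will add
# once Collectors live in the cloud. All figures below are illustrative
# assumptions, not vendor guidance.

def added_mbps(jobs_per_day, avg_job_mb, busy_hours=8, peak_factor=3):
    """Average extra Mbps during the business day, plus a peak estimate.

    Each job crosses the WAN twice: once when spooled up to the cloud
    Collector, once when pulled back down for release.
    """
    mb_per_day = jobs_per_day * avg_job_mb * 2           # up + down
    avg = (mb_per_day * 8) / (busy_hours * 3600)         # megabits per second
    return avg, avg * peak_factor

avg, peak = added_mbps(jobs_per_day=5000, avg_job_mb=2)
print(f"average: {avg:.2f} Mbps, peak: {peak:.2f} Mbps")
```

If the peak figure is a meaningful fraction of your current broadband capacity, that is your cue to budget for the bandwidth increase flagged in the plan below.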


"Lift and Shift" as a Viable Migration Strategy

In most cases, the transition to the cloud's "Infrastructure as a Service" (IaaS) platform is just a "lift" out of the organization's data center (or server closet) and a "shift" onto rented infrastructure full of AWS EC2 instances (Elastic Compute Cloud, or EC2, is the marketing name for virtualized computing resources, or servers) identified as migration targets. And while it sounds like a half-step forward, in many cases it is the best first choice when going to the cloud. Here's how it works:


The current Pharos Blueprint implementation comprises eight servers:

  • One (1) server hosting Microsoft SQL Server
  • One (1) server hosting Pharos Blueprint as an Analyst
  • Four (4) servers hosting Pharos Blueprint as a Collector for Secure Release Here (SRH) printing
  • Two (2) servers hosting Pharos Blueprint as a Collector for Print Scout management, Policy Print, and job recording

In front of the four SRH servers sits an F5 BIG-IP load balancing solution hosting one Global Traffic Manager instance and two underlying Local Traffic Manager groups, providing geographic isolation as well as failover.



Cloud is not simple. At least, its design isn't. Amazon Web Services, for example, offers 24 different categories of service offerings spanning the more mundane, like Storage and Compute, to "Buck Rogers" stuff like Machine Learning and Content Delivery. A standard rule of thumb with cloud migration projects is to start simple (the "KISS" philosophy) and then treat the migration as an iterative design journey, methodically adding and improving as solution stability and the knowledge base grow. Management (cost containment) and users (general availability and change management) are both usually very happy with that type of plan. Here follow the "4 Simple Steps" to migrating to cloud services. I will be using AWS as the service provider and Blueprint Enterprise as the Pharos solution being migrated; the other providers offer equivalent features and functions, and, with minor exceptions, Uniprint follows the same process.


The 4 Simple Steps to Cloud Migration

Step 1. Design.

Within AWS, focus on the Compute and Networking offerings. This means that, using the example above, an initial Cloud implementation will be made up of:

  • Eight (8) EC2 instances running Windows Server (because that's what I have now)
  • One (1) Virtual Private Cloud (VPC)
  • Either AWS Direct Connect (1 Gbps or 10 Gbps) or a VPN to connect the VPC with my network


...A quick note on Cloud Database Services

Everybody offers a cloud version of Microsoft SQL Server. In AWS, it is called RDS, or Relational Database Service. RDS for SQL Server sounds very attractive based on its "sell sheet": automatic backup, automated failover to another Availability Zone, and so on...but you lose any chance of establishing a "SYSADMIN" account in RDS, so simple things like installing Blueprint's databases become arduous, if not impossible, tasks. There is a way to migrate local databases to RDS by first using S3 (AWS' storage platform) to store backup (.bak) files of the database. Don't do it; it is far better to stand up a suitably-spec'd EC2 instance, install SQL Server using a purchased license, and administer it as if it were hosted on premise.


EC2 Instances

Amazon offers a staggering number of pre-configured Amazon Machine Image (AMI) templates for Windows servers. Focus on the essential: Windows version, how many CPUs, how much RAM, how much storage? The Blueprint Installation Guide, beginning on page 19, has several worksheets to determine what those need to be. Then, it's a matter of picking which AMI comes closest to the plan. If the math lands in between two options, pick the more robust choice.
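The "pick the more robust choice" rule can be expressed as a tiny selection helper. The catalog below is a small, hand-entered subset of published AWS general-purpose instance specs used purely for illustration; verify current sizes and pricing in the EC2 documentation before relying on it.

```python
# Sketch of "pick the closest instance type that meets or exceeds the plan."
# Catalog entries are (name, vCPUs, RAM in GiB), hand-copied from AWS's
# published m5 specs; treat them as examples, not an authoritative list.

CATALOG = [
    ("m5.large",   2,  8),
    ("m5.xlarge",  4, 16),
    ("m5.2xlarge", 8, 32),
]

def pick_instance(need_cpus, need_ram_gib):
    """Smallest catalog entry meeting or exceeding both requirements.

    If the worksheet math lands between two sizes, this naturally lands
    on the more robust option, matching the advice above.
    """
    for name, cpus, ram in sorted(CATALOG, key=lambda t: (t[1], t[2])):
        if cpus >= need_cpus and ram >= need_ram_gib:
            return name
    return None  # nothing in the catalog is big enough; look at larger families
```

For example, a worksheet result of 3 CPUs and 20 GB of RAM falls between sizes, and the helper resolves it upward to the 8-vCPU/32 GiB option.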


 PRO TIP: During Design, settle on a normalized Collector specification. During Build, create an EC2 instance from the best-suited AMI. Install all of the necessary prerequisites, and then save out an AMI of your own to serve as the template for the Collector server infrastructure going forward. This saves time (and error) on subsequent builds.
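If you script the PRO TIP's "golden AMI" capture rather than clicking through the console, the boto3 `create_image` call takes parameters along these lines. The instance ID and naming convention here are hypothetical placeholders; only the parameter names match the AWS API.

```python
# Sketch: building the keyword arguments for boto3's EC2 create_image call
# to capture a "golden" Collector AMI. Instance ID and the naming scheme
# are hypothetical; adapt them to your own conventions.

def golden_ami_request(instance_id, role, version):
    """Return the kwargs you would pass to ec2_client.create_image()."""
    return {
        "InstanceId": instance_id,
        "Name": f"pharos-{role}-golden-{version}",
        "Description": f"Pre-configured Pharos {role} template, build {version}",
        "NoReboot": False,  # allow a reboot for a consistent filesystem snapshot
    }

params = golden_ami_request("i-0abc123def456", "collector", "v1")
# With AWS credentials configured, the actual call would be:
#   import boto3
#   boto3.client("ec2").create_image(**params)
```

Versioning the AMI name means subsequent Collector builds can always be traced back to the template they came from.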


The Virtual Private Cloud (VPC) and Connecting to the Network

The VPC is just a network, made up of one or more segments (subnets). The VPC is defined by providing a name, an IPv4 address range (in classless inter-domain routing, or CIDR, notation), whether IPv6 is desired, and whether the tenancy of the VPC will be dedicated (running on dedicated hardware) or simply use the default.


Once that is defined, other things come along for the ride, like

  • Network ACLs. Firewalls. 'Nuff said.
  • DHCP Options. Much like defining a DHCP scope in Windows, this defines the domain name, DNS servers (either Amazon-provided or a service provider's), time server, NetBIOS (WINS) servers, and the NetBIOS node type.
  • Routing tables. Since this is a local network, each subnet must have a router with defined rules to direct traffic.
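Python's standard `ipaddress` module makes the CIDR math for carving subnets out of a VPC range painless. The 10.20.0.0/16 range below is an example value, not a recommendation; the five-address deduction reflects AWS's documented practice of reserving the first four and last address in every subnet.

```python
# Sketch: carving per-site /24 subnets out of an example VPC CIDR and
# checking how many instance addresses each one actually yields.
import ipaddress

AWS_RESERVED_PER_SUBNET = 5   # network, broadcast, plus three AWS-reserved hosts

vpc = ipaddress.ip_network("10.20.0.0/16")        # example range only
subnets = list(vpc.subnets(new_prefix=24))         # 256 possible /24 segments

def usable_hosts(subnet):
    """Addresses actually assignable to EC2 instances in this subnet."""
    return subnet.num_addresses - AWS_RESERVED_PER_SUBNET
```

With a /24 per site, each segment leaves 251 assignable addresses, which is far more headroom than the eight-server design above needs.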


In most cases, the scenario is extending an onsite network into the cloud, so choosing "VPC with a Private Subnet Only and Hardware VPN Access" from the VPC console wizard is usually the best choice and will provide all the options you need to get started. However, depending on the size of the organization and other factors (like aversion to network latency), exploring a Direct Connect option might be better, provided that all prerequisites are met.


AWS Direct Connect

Direct Connect is an Amazon service that provides high-speed (up to 40 Gbps), low-latency connections directly into your network. This is a high-cost option and requires that you are close to one of Amazon's sites.


Step 2. Plan

With the Design in place, map out what needs to be done. At a high level, a Plan would look something like this. For simplicity's sake, this plan does not include migration of the database.

Milestone 1. Connectivity. Week 1, Days 1-5.

  1. Build out the VPC and connect it to our network.
    • Dependencies:
      • InfoSec sign-off
      • [Optional] Increase WAN connection bandwidth
      • [VPN connectivity established]/[Direct Connect process completed]
  2. Design and implement the necessary Network ACLs, DHCP options, and routing tables.

Milestone 2. SQL Server and Analyst EC2 Builds. Week 2, Days 6-7.

  1. Create an EC2 instance using an AMI that meets or exceeds the hardware requirements for the SQL Server in the VPC and join it to the domain.
  2. Test connectivity from a client (file share to C$ or similar).
  3. Create an EC2 instance using an AMI that meets or exceeds the hardware requirements for the Analyst server in the VPC and join it to the domain; install all necessary Blueprint pre-requisites.
  4. Install SQL Server {version} on identified EC2 instance, apply management templates, and define initial permissions.
  5. Validate connectivity between SQL server and targeted Analyst server by defining an ODBC connection to the "master" database.
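Before defining the ODBC connection in step 5, it can save time to confirm raw TCP reachability first, since a blocked port and a bad credential produce similar-looking ODBC failures. SQL Server listens on TCP 1433 by default; the host name in the example is hypothetical.

```python
# Quick TCP reachability check to run before the ODBC validation step.
# SQL Server's default port is 1433; substitute your actual port if the
# instance is configured differently.
import socket

def port_open(host, port, timeout=3.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Example (hypothetical host name):
#   port_open("sql01.example.internal", 1433)
```

If this returns False, revisit the Network ACLs and routing tables from Milestone 1 before troubleshooting SQL Server itself.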

Milestone 3. Install Blueprint. Week 2, Days 8-9.

  1. Install the Blueprint Analyst software on the EC2 instance for the Analyst.
  2. Create an EC2 instance using an AMI that meets or exceeds the hardware requirements for the Collector server in the VPC and join it to the domain; install all necessary Blueprint pre-requisites.
  3. Test connectivity to the Analyst from a client (file share to C$ or similar).
  4. Test connectivity to the Collector from a client (file share to C$ or similar).
  5. Install the Blueprint Collector software on the EC2 instance.
  6. Apply any updates applicable to the general release to both the Analyst and Collector.
  7. Validate connectivity between client endpoints and the Collector over the necessary ports (see Network Ports).
  8. Create an AMI using the tested Collector as the source.
  9. Deploy the remaining Collectors using the new AMI.
  10. Create the necessary certificates, install them on the Collector servers, and bind HTTPS/TCP 443 for IIS.

Milestone 4. Client Connectivity. Week 3, Days 11-15.

Desktop and Print Server Print Scouts

  1. Run the Print Scout installer on a workstation with the /reinstall switch.
    • Change the /serverurl property to the appropriate Collector name.
    • Apply the /serverlessurl property to the appropriate Collector name. [Workstations only]
    • Apply the /printserver property [Print Servers only]
  2. Verify that the workstation is attached to the appropriate Collector.
  3. Verify that the workstation has the "Pharos Secure Queue" installed.
  4. Print a test job to the Pharos Secure Queue on the workstation.
  5. Log into the "MyPrintCenter" website on the Collector and verify the job is shown. NOTE: Polling from the workstation to Collector for stored print jobs is once every 60 seconds; refresh the Job List if needed.
  6. Create the workstation package for the /reinstall with the necessary switches.
  7. Create the print server package for the /reinstall with the necessary switches.
  8. Test the workstation package; verify connectivity to the Collector and the Pharos Secure Queue.
  9. Test the print server package; verify connectivity to the Collector.
  10. Deploy Phase I.
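Because the /reinstall switches differ between workstations and print servers, a small helper that assembles the command line reduces packaging mistakes. The installer filename below is a placeholder; the /serverurl, /serverlessurl, and /printserver switches are the ones named in the milestone steps, but confirm exact syntax against the Print Scout installer documentation.

```python
# Sketch: assembling the Print Scout /reinstall command line described in
# the milestone above. Installer filename is a hypothetical placeholder.

def scout_reinstall_cmd(collector_url, is_print_server,
                        installer="PrintScoutInstaller.exe"):
    """Build the reinstall command for a workstation or print server package."""
    parts = [installer, "/reinstall", f"/serverurl={collector_url}"]
    if is_print_server:
        parts.append("/printserver")                       # print servers only
    else:
        parts.append(f"/serverlessurl={collector_url}")    # workstations only
    return " ".join(parts)
```

Generating both packages from the same helper guarantees the only difference between them is the switch set, which is exactly what steps 6 and 7 call for.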

Terminal-hosted and Integrated MFPs

  1. Duplicate a terminal definition for a device from the on-site Production Analyst to the cloud Production Analyst. After the first one, ensure that the new terminals are using the correct Terminal Type (not Generic).
  2. Redefine the host server setting to a cloud Collector.
  3. Validate that the terminal type updates and that there is a listed Collector.
  4. Modify (if applicable) the terminal settings.
  5. Make the appropriate Secure Release Here "default" settings changes to the terminal type.
  6. Migrate Phase I MFPs. NOTE: If more than one terminal type exists, perform steps 1-5 on a single representative device prior to migrating those terminal types.

Milestone 5. Phase II. Week 4, Days 16-20.

  • Deploy Phase II client workstations and print servers for Print Scout parent Collector changes.
  • Deploy Phase II MFPs.

Milestone 6. Monitoring, Measurement, Management. Weeks 3-5, Days 11-25.

Ongoing verification that converted clients are connecting to the appropriate Collectors, that job data is being collected and uploaded to the Analyst, and that publications are successful. Correct gaps as necessary and repeat.

Milestone 7. Project Close. Week 6, Day 26.


Step 3. Build

Use either the Management Console or the Command Line Interface (CLI) to build everything out, working through Milestones 1 through 5 of the Plan in order.


Step 4. Monitor, Measure, and Manage (Milestone 6).

Monitoring, Measuring, and Managing the client-side roll-out is as important a task as any of the previous ones. By the end of Phase I, the evaluation of capacity assumptions made during the Design phase can begin. It may be determined that the Collector capacity needs adjusting up or down based on projected data. Roll Phase II before making any adjustments, as Phase I data may be incomplete.


Dismantling the Onsite Infrastructure.

Once the migration to the cloud-hosted environment is complete, dismantle the onsite infrastructure from the outside in, shutting down the Analyst last to ensure that all remaining data in the onsite implementation has been successfully published to the SQL Server databases. Retain this data, potentially migrating it to an AWS RDS SQL Server instance (since it's static data, there's no need for a SYSADMIN account to manage things) for ongoing reporting.