Posted on: October 31, 2019 By: Carolyn Kuczynski

The Differences Between Backups and Disaster Recovery

By Expedient, CNSG Platinum Supplier

Modern IT departments need to be able to protect their mission-critical data and quickly respond in the event of a problem in their environment. This capability hinges on maintaining a strategy for both backups and disaster recovery. These two capabilities are often conflated, but this blog post will explore the differences between them and explain the situations each is needed for.


Backups

Backups have been around almost as long as computers themselves and revolve around making point-in-time copies of all data on a given system. A good backup solution should be capable of restoring anything from a single file to an entire system to a recent known-good state. These backups are crucial in several situations:

  • A developer accidentally drops a table from a production database, and it needs to be restored.
  • A user accidentally deletes a file from a shared drive.
  • A single server fails due to a bad software update and must be restored to the state it was in prior to the change.
  • A server is impacted by ransomware or other malware that renders it unusable.

When deciding on a backup solution, several factors must be considered:

  • Infrastructure integrations
    • Whether you’re running workloads on physical servers, a local hypervisor (vCenter) or in the cloud, your backup solution should work with the underlying infrastructure to leverage capabilities like snapshots, which will enable performant backups and rapid restores of entire systems.
  • Application Compatibility
    • Applications, particularly databases, often have their own requirements for performing successful backups and restores. Before settling on a backup solution, you should ensure that it is compatible with your existing applications and be aware of any special procedures required to perform a successful backup that can be restored.
  • Recovery objectives
    • For each of the issues listed above, decide how long an outage is acceptable (your Recovery Time Objective, or RTO) and how much data loss is acceptable (your Recovery Point Objective, or RPO).
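These objectives connect directly to the backup schedule: with point-in-time backups, the worst-case data loss is the interval between backups. A minimal sketch in Python (the function names are illustrative, not from any particular backup product):

```python
from datetime import timedelta

def max_data_loss(backup_interval: timedelta) -> timedelta:
    """Worst-case data loss for point-in-time backups: a failure just
    before the next backup loses everything since the last one."""
    return backup_interval

def meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    """A backup schedule satisfies an RPO only if the worst-case
    data-loss window fits inside it."""
    return max_data_loss(backup_interval) <= rpo

# Nightly backups cannot satisfy a one-hour RPO...
print(meets_rpo(timedelta(hours=24), timedelta(hours=1)))  # False
# ...but hourly backups can.
print(meets_rpo(timedelta(hours=1), timedelta(hours=1)))   # True
```

The RTO side is analogous: the time to restore from the chosen backup medium must fit inside the acceptable outage window.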

Disaster Recovery

Ten years ago, the disaster recovery plans of most organizations involved moving a set of backup tapes off-site at the end of every week and calling it a day. That is fine as long as the business is OK with potentially losing a week’s worth of work and being offline until a new data center is set up and all of the systems are restored. Of course, most businesses in 2019 simply can’t survive that sort of operational disruption, so new solutions must be devised to deliver the desired level of IT availability. Here are a few scenarios that your disaster recovery solution should be able to handle:

  • The obvious – some disaster that physically damages/destroys the datacenter where your workloads exist
    • Earthquake
    • A water leak in some office that exists above the datacenter, causing a flood
  • A full network outage in your primary datacenter, so while the systems may still be running they are not accessible to your users
  • A long-term power outage at your office (assuming your infrastructure is in an office datacenter rather than a colocation facility)

Dealing with the issues above while maintaining the availability the business demands obviously requires a solution that goes beyond backups and includes additional requirements, such as:

  • A “hot site”
    • Basically, this entails a separate location, other than your datacenter, where your workloads can be rapidly brought online and made available to your users
  • Continuous replication
    • To meet the aggressive RPOs modern businesses demand, data needs to be continuously replicated to the hot site so that data loss in the event of a disaster is minimal
    • This data replication feature is also incredibly useful for migrating workloads from one environment to another with minimal downtime
  • Service Discovery
    • After a failover takes place, how do your users connect to the new environment? For smaller organizations, this can be as simple as desktop engineers working with users to point their local machines to the new location, but for larger organizations, this needs to happen automatically.
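The continuous-replication requirement above can be made concrete: at the moment of a disaster, the current replication lag to the hot site bounds your data loss, so it must stay within the RPO. A rough sketch with hypothetical names and numbers:

```python
import time

def replication_within_rpo(last_replicated_ts, rpo_seconds, now=None):
    """True if the data you would lose on an immediate primary-site
    failure (i.e., the current replication lag) fits inside the RPO."""
    now = time.time() if now is None else now
    lag = now - last_replicated_ts
    return lag <= rpo_seconds

# A 30-second lag is fine for a 5-minute (300 s) RPO...
print(replication_within_rpo(last_replicated_ts=1000.0, rpo_seconds=300, now=1030.0))  # True
# ...but a 10-minute lag is not.
print(replication_within_rpo(last_replicated_ts=1000.0, rpo_seconds=300, now=1600.0))  # False
```

The same lag check is what makes replication useful for migrations: cut over only once the lag is near zero, and downtime stays minimal.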

The considerations for a DR platform are similar to, but slightly different from, those for a backup solution. They include:

  • Infrastructure integrations
    • A DR solution needs to be natively compatible with both the infrastructure you are protecting and the environment you plan to recover to.
    • If you are planning to use a recovery site with a different hypervisor than your primary site (e.g., failing over to a hyperscale cloud such as AWS or Microsoft Azure, or from a physical server into a VM), it is also important that your DR solution has a way of reconfiguring the machine’s operating system with the correct drivers and other software packages for the new environment.
    • Many modern DR solutions enable near-real-time RPOs; however, this is heavily dependent on your network link to the recovery site and your data change rate.
    • Due to differences in how cloud providers work internally, different DR solutions offer a range of RTOs on different platforms. Look at the expected RTO for the source site/recovery site/DR software combination you plan to use and ensure that it meets the requirements of your business.
  • Consistency requirements
    • Unlike traditional backups, DR solutions are generally not aware of underlying applications like databases. This means that additional care must be taken to ensure that data is consistent across servers in the event of an unexpected failover.
    • Some DR solutions incorporate the idea of a “consistency group” which ensures that a group of servers are always being restored to the exact same point in time to avoid inconsistency issues after a restore.
  • Service Discovery
    • After a failover is completed, a full DR solution should take steps to ensure that users are able to reach their applications quickly and without direct intervention from IT staff
    • The traditional way of handling this is to reconfigure DNS entries to point to the new site; however, this method of failover can take a long time to propagate to all users, particularly if they are globally distributed or if long TTLs exist on the entries themselves. Whatever the case, make sure to account for this additional time in your RTO calculations.
    • An alternate way of dealing with this problem is to fail over the entire networking stack with the compute resources and announce the same IP addresses from the new location. This requires a deep integration with the infrastructure of both the source and destination sites. By automating and orchestrating network failover, Expedient’s Push Button DR solution enables this failover approach, which enables real-time RPOs and RTOs measured in minutes.
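To see why DNS TTLs belong in the RTO calculation, consider a worst-case estimate: users behind a caching resolver may keep the old record for up to one full TTL after the DNS change. A back-of-the-envelope sketch with hypothetical numbers:

```python
def dns_failover_rto_minutes(infra_recovery_min, dns_ttl_min, dns_update_delay_min=0):
    """Worst-case user-visible RTO for DNS-based failover: the hot site
    must come up, the DNS record must be changed, and resolvers may
    serve the cached old record for up to one full TTL afterwards."""
    return infra_recovery_min + dns_update_delay_min + dns_ttl_min

# 15 minutes to bring up the hot site plus a 60-minute TTL means some
# users may not reach the new site for 75 minutes.
print(dns_failover_rto_minutes(15, 60))  # 75
```

This is why network-level failover, which announces the same IP addresses from the new site, can deliver RTOs measured in minutes rather than in TTLs.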

While these DR capabilities are wonderful, it’s important to remember that, as of right now, most DR solutions are optimized for the recovery of VMs but lack important backup features like the ability to rapidly restore a single file deleted by a user or to bring back a database table that was mistakenly dropped. Since your business will most likely require a combination of DR and backup functionality, a hybrid solution that supports both is typically recommended. Expedient’s diversified DRaaS platform – Push Button DR – enables various levels of IT resilience. Download this infographic to learn more.

Positioned in Gartner’s Magic Quadrant for disaster recovery as a service (DRaaS) and ranked #12 globally in the MSP 501 rankings for managed service providers, Expedient is an industry-leading provider of disaster recovery solutions. Learn more at


Posted on: October 15, 2019 By: Carolyn Kuczynski

Our bring your own network (BYON), access-agnostic solutions equip your customers to easily swap phone systems and migrate to the latest communications and networking technologies they need while keeping their underlying connectivity in place. Cloud-based Unified Communications as a Service (UCaaS), value-added SD-WAN and managed security services will improve customer engagement, employee productivity, network performance, service reliability and cyberthreat protection.

Featured BYON Windstream Enterprise solutions include:

  • SD-WAN – Choose the technology platform that is right for your business from two of the leading WAN edge infrastructure providers, VeloCloud or Fortinet. Both options are recognized as leaders in Gartner’s Magic Quadrant, provide PCI DSS compliance, and leverage our state-of-the-art partner portal
  • UCaaS – Our many “flavors” of Unified Communications as a Service offer a more connected, collaborative workforce, with instant messaging, chat, presence, mobility, conferencing and CRM integrations—all backed by a world-class network with 99.99% always-on reliability
  • Security and Compliance – Most experts agree that a security breach for most companies is no longer a question of if it will happen, but when it will happen. Our suite of Security Services includes Cloud and CPE Firewalls, SIEM and DDoS Mitigation to shield against the most sophisticated threats.

Enjoy big payouts. In addition to our standard competitive residual monthly compensation, we’ll give you a 5% bonus residual and up to 4X accelerator for BYON services!

Everything you need from a single source. If you’re also looking for network solutions, either as a replacement or for diversity, we can do it all—BYON, network solutions or both.

Want to Learn more? E-mail Windstream Enterprise

Posted on: October 8, 2019 By: Carolyn Kuczynski

By Chris Betz, Chief Security Officer, CenturyLink

Let me put it another way: Security can be complex. The true art is making security easy to use.

As a Fortune 150 company and the second largest U.S. communications provider to global enterprise customers, we are responsible for securing our own operations through a suite of hybrid IT, cloud, networking and communications solutions — in addition to those of our customers. As CSO for this company, I can attest to the fact that the pressures security leaders face today are many.

On one hand, we have the explosion of network traffic spurred by video, 5G, IoT, connected devices and a mobile workforce; on the other, we have a justified and growing intolerance by users — both internal and external — for anything less than always-on, flawless performance. Couple this with the patchwork nature of many of today’s security solutions, which businesses are often left to stitch together on their own; the gap between security and engineering teams that often reflects security as an afterthought; and the shortage of qualified security professionals — and the picture can seem bleak.

But security can be simple: We believe that the inherent value of a security solutions provider should first and foremost be effective simplicity.

At CenturyLink, our security builds on two fundamental directives: to leverage our expansive global threat visibility and to act against the threats we see. Our unique and deep network-based threat intelligence makes our approach possible — and it is the foundation of Connected Security, our vision for seamless integration between security and the network to transform the communications of tomorrow.

The more we can do as a global security services provider to identify or impact malicious traffic before it hits our customers’ infrastructure, the better customers can focus and prioritize their resources elsewhere. This is the promise of Connected Security and the premise upon which we have transformed our network into a threat sensor and proactive defense platform.

Disrupting the security threats that we face today — and the threats we will face tomorrow — requires more than intelligence. It requires a collective commitment to share what we see and to act on what we know. We look forward to continuing to work together as we drive toward simplifying security.

Click here to view and download the full CenturyLink 2019 Threat Report:


October is National Cybersecurity Awareness Month and as a CenturyLink Channel Partner, you have access to sell CenturyLink’s full suite of trusted Security Solutions. For more information, please contact your Channel Manager or


Posted on: September 24, 2019 By: Carolyn Kuczynski

About the only thing shifting as fast as the cyber threat landscape is the typical enterprise’s org chart. As enterprises aim to keep pace with the rapidly evolving digital economy, many are restructuring internal departments, hiring criteria and the processes by which they develop and distribute products, all with the overarching objective of becoming more proficient at rapidly responding to new opportunities in the marketplace.

In making these well-intentioned adjustments, an enterprise’s ability to establish robust, broadly integrated cybersecurity as a core capability of its recalibrated operation will be one of the best predictors of whether the changes prove successful.

The Expanding Footprint of Data in the Enterprise

Achieving a solid, enterprise-wide cybersecurity posture is difficult not only because cyber threats continue to grow in volume and sophistication, but also because of the expanding footprint of data in the enterprise.

Call data the new gold, the new air, the new oil – whichever metaphor you prefer – and the reality remains that the need to leverage data is becoming increasingly essential across lines of business. That is one of the main reasons why security teams must not see themselves as the sole implementers and enforcers of sound security practices, but must instead make spreading security awareness and the adoption of clear policies among their colleagues an ongoing, sustained point of emphasis.

More than 8 in 10 respondents to ISACA’s research say that establishing a stronger culture of cybersecurity would increase their organization’s profitability, and this will only become more on-target as organizations increasingly embrace digital business models.

The rising profile of data analytics factors in heavily, as referenced in a recent McKinsey article, which noted that “as companies adopt massive data analytics, they must determine how to identify risks created by data sets that integrate many types of incredibly sensitive customer information. They must also incorporate security controls into analytics solutions that may not use a formal software-development methodology.”

The cloud is another area in which proactively bolstering security capabilities will be critical in the new enterprise environment. While cloud computing is certainly not new, turning to cloud providers has become increasingly attractive for many enterprises whose traditional server-based approach no longer is sufficient for storing and protecting their data.

Modern cloud platforms supply enterprises with an array of options that provide data storage and protection that can lead to dramatically improved scalability and flexibility. While new, sophisticated security capabilities are being integrated into today’s cloud platforms, these capabilities are not always integrated into organizations’ security programs, whether due to discomfort with trying new approaches or just the challenge of carving out time to explore them amid the usual, day-to-day challenges. This is a missed opportunity for enterprises to enhance their security programs and derive additional value from their investments in the cloud.

Turning DevOps into DevSecOps

 Another dynamic elevating the importance of broader integration of security principles is DevOps. In an era in which business velocity can reach a dizzying pace, enterprises have turned to DevOps to move faster and more efficiently in their builds, deliveries and deployments.

The problem is, security is often an afterthought in this process, which puts developers in the difficult position of trying to figure out security best practices on their own. Working security into the DevOps program – referred to as DevSecOps – allows the security team to become involved during the design phase and ensure that critical security flaws are identified and addressed early, before they become increasingly costly to fix later in the process.

Similarly, Agile development methodology needs to take cybersecurity considerations into account, such as ensuring that all data is properly categorized and that a comprehensive, risk-based approach to safeguarding the data is in place.

Historically, enterprises have typically been more attentive to positioning themselves to sell products and increase revenue than to protecting themselves and their customers from security threats. But as we near a new decade – the 2020s – the pace at which enterprises realign to thrive in a technology-driven digital economy will only accelerate. We remain in the early stages of this era of digital transformation.

Consider the way technologies such as artificial intelligence/machine learning, robotics, and the ongoing proliferation of connected devices will create new business opportunities that result in new methods of product development and ushering products to market. Anything less than deeply ingrained cybersecurity throughout the enterprise will not work going forward.

By integrating sound cybersecurity practices in all areas of the organization, implementing new security capabilities that are baked into modern cloud services and turning DevOps into DevSecOps, enterprises will have the flexibility to re-imagine their business models while retaining a stable foundation on which to innovate.

Interested in learning more about the biggest trends in cybersecurity? Read CenturyLink’s 2019 Threat Report.

For more CenturyLink blog content, visit the NetNext blog at

Posted on: September 19, 2019 By: Carolyn Kuczynski

Redundancy and availability are two critical aspects of any network design. They provide metrics for evaluating the robustness of your network in a non-ideal scenario, ensuring business continuity and minimizing end-user impact.

Redundancy refers to the duplication of network elements or functions such as servers, power supplies, cabling, fans, etc. so that operations continue during a system failure or voluntary maintenance. Availability pertains to unplanned downtime and is defined by time metrics indicating how long a network can tolerably be offline or unavailable.

BullsEye Telecom adheres to industry standards and provides a best-case scenario of 2N redundancy, with two geographically redundant data centers offering systems and services accessible at any time, and four nines availability, a strict provision of only 52 minutes, 36 seconds of downtime per year.
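The four-nines figure can be checked with quick arithmetic: the annual downtime budget is (1 − availability) × one year. A small sketch, using a 365.25-day year:

```python
def annual_downtime(availability_pct, days_per_year=365.25):
    """Maximum unplanned downtime per year implied by an availability
    percentage, returned as (whole minutes, seconds)."""
    down_fraction = 1 - availability_pct / 100.0
    total_seconds = down_fraction * days_per_year * 24 * 60 * 60
    return divmod(round(total_seconds), 60)

# "Four nines" (99.99%) allows roughly 52 minutes, 36 seconds per year.
print(annual_downtime(99.99))  # (52, 36)
```

Each additional nine cuts the budget by a factor of ten: five nines leaves only about five minutes per year.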

Customers are known to opt for a VoIP service provider who advertises such data about their infrastructure, but how many of them challenge the provider’s claims? It is important for any provider not only to embrace the standards as a fundamental requirement of network design, but also to perform periodic audits to test the validity of their solution.

BullsEye Telecom is proud to have implemented the practice of testing our VoIP network five years ago. Ever since, we have performed the VoIP disaster recovery (DR) test annually to ensure compliance with advertised metrics and have published detailed reports on the outcome of our testing.

The VoIP DR test is usually scheduled early in Q2 during a pre-published maintenance window on a Sunday morning between 2:00 AM and 7:30 AM. Customers are notified of the event in advance to prepare for the testing. During the maintenance, a controlled primary data center failure is simulated to gracefully transition all telephony services to the secondary node.

A detailed test plan is executed before simulation, post-secondary failover and post-primary failback with extensive data and log collection. The test plan includes every scenario of all VoIP solutions offered by BullsEye and validates the network interaction for each case. The collected data is reviewed in detail and shortcomings, if any, are identified and addressed before the next annual DR.

All findings, with data captures, are collected and published within the organization with accurate timelines. Customers seeking information about how their services were handled during the transition can request a copy of the DR report, which is shared transparently. While most service providers merely boast about their network redundancy and availability, BullsEye Telecom demonstrates it!

For information on how BullsEye can help your organization in the event of a disaster, send a message or give us a call at 877-638-2855

Posted on: By: Carolyn Kuczynski

Car dealerships have a stack of specific needs for internet connectivity. Customer credit checks, bank rates, loan approvals, even coordinating with the local DMV — all require a dependable internet connection. On average, companies in the U.S. experience network outages five times per month — that’s 60 periods of downtime each year! According to a recent IHS study, outages cost more than $700 billion a year.

Beyond those car dealership specific needs, odds are a local dealership — or regional auto group — relies on the internet for a whole slate of cloud-based applications. Office 365, G Suite, cloud-based inventory control and ordering, and even online marketing. In today’s world, chances are any business is going to be hampered by a broken internet connection.

That’s where CellCast enters the picture.

CellCast 4G-LTE

In the past, solving network connectivity issues could be a complicated matter of determining providers, technologies, capabilities, and costs for any given location. As a backup to your existing network, CellCast: FailOver is easy to configure and install and transforms any network location into a fully managed solution.

For locations with no wired internet options, CellCast: Primary delivers high-speed 4G-LTE internet that is cost-effective and can be installed almost anywhere.

Microspace: The Difference

Some of the main differences between CellCast and other 4G solutions include superior network management and award-winning white-glove support. Real-time status and health reporting of each CellCast router back to the Network Operations Center in Raleigh, N.C., gives Microspace the capability to react quickly and effectively to keep your office connected.

Beyond router and network status, each device can also be remotely accessed for security and firmware updates — keeping your network up to date and secure.

With available month-to-month billing CellCast: Primary is also an ideal solution to delayed network installations or ongoing service issues. In emergency situations, routers can even be deployed and connected overnight.

Solution Brief: Sales Goals

The folks at Smith Auto Group are closing in on their monthly sales goals. Several salespeople are working with customers. More than a couple have deals made, and now it’s just a matter of confirming trade value, checking credit, getting loan rates, and producing paperwork for the DMV.

Things are going well until their internet service provider crashes. Without internet connectivity, these sales can’t happen. Connectivity to CRM information is lost. Smith Auto also loses out on financing deals and closing sales because they can’t process paperwork online. They can’t even make phone calls because it’s a bundled service.

This is a scenario that’s ideal for CellCast: FailOver. A fully managed CellCast router automatically takes over, so the dealership stays connected and doesn’t miss a beat. When the local ISP resolves the outage, the system resets automatically. True business continuity.

For dealerships with multiple locations, Microspace can even arrange for shared data pools across locations to maximize the value of an inexpensive, managed failover system.

Posted on: September 16, 2019 By: Carolyn Kuczynski

Most people have heard the term “MSP.” While a majority understands that it stands for “Managed Service Provider,” very few have a good grasp of what it really entails. As a result, many customers have ill-informed expectations.

To help clear up any misunderstanding, I spoke with Erik Nordquist, TPx’s Product Manager for MSx Security Services and MSx Datacenters.

Erik, what do TPx customers really get with our Managed Firewalls and Managed SD-WAN solutions?

For Managed Firewalls, our security specialists monitor customers’ firewalls 24/7 in our own SOCs (Security Operations Centers) to make sure the firewalls are up and operating. If a firewall goes down, we open a ticket and engage the customer to make sure it isn’t an ISP issue. If it is the ISP, then we work internally to make sure there isn’t an issue with our circuit.

When we provision the firewall, we make sure it is provisioned properly and that it adheres to best practices.  We make all the necessary changes but keep the old configurations, with the new configurations being stored off-site in case there is a need for an audit or for disaster recovery.

If any vulnerabilities are discovered (not common, but can happen), then we update the firmware to close off the vulnerability.  If there are bugs, we work to resolve them.  If the hardware has issues, we work to get the hardware replaced.  The customer never needs to notify the vendor – we do that for them.  If any scans or compliance issues come up, we help resolve those with the customer.

If there is an issue on the network, we troubleshoot for the customer to best determine where the problem is.  We also provide reports about what is happening on the customer’s network – for instance, what their web usage looks like.  With the MSx Optimum Firewall service, we maintain customer traffic logs for 4 months.

All of these things can happen at any time – day or night.  If a customer wants to make a change or just has a question, they can open a service ticket, send an email, or call the SOC team to speak with someone.

When we are managing a third-party circuit, since we are an authorized contact, TPx can open tickets if there are problems with the circuit and can work with the provider to troubleshoot the issues. This way the customer only has one company to contact and doesn’t have to lose time by dealing with multiple parties.

Can you share some more benefits of managed services?

We deal with most issues that come up, which frees up the customer to concentrate on other areas.  In general, this is what managed services is all about.  An IT person may have general knowledge in all areas but not really specialize in one single area.  TPx has experts in all areas that we manage, and we have the systems in place to offer enterprise-level services that would otherwise not be available to smaller businesses.  Instead of hiring expensive IT people that are hard to find these days, customers can look to us to do this work.

Why should businesses choose TPx over other Managed Services Providers?

TPx is large enough to get the job done right and agile enough to get it done on time. TPx puts an emphasis on using market-leading technology while providing excellent customer service around the clock. Our services portfolio is designed to be a one-stop-shop for IT and security, so customers can eliminate the complexity and headaches that come with dealing with multiple vendors. Very few providers in the U.S. can offer a product portfolio scope of a nationwide managed services carrier like we do. Our trained and experienced staff watches over our customers 24/7/365 so that if an issue arises, it is resolved quickly and effectively. Our solutions are designed to provide enterprise-level quality and customization without an enterprise price tag.  We offer incredibly flexible cost options for customers, based on their service level needs and service commitment lengths.


Thanks for your insights, Erik!

You may feel overwhelmed with all the cybersecurity and IT pressures of today’s digital environment, but there is a light at the end of the tunnel. Let us give you a hand with your IT and security – schedule a free consultation with our specialists or call 888-407-9594.


About the Author

Lucie Hys is a Senior Product Marketing Manager at TPx. She is currently leading the marketing efforts for the company’s MSx suite of managed services. She has been working in marketing for more than 9 years, with the last four focusing on the cybersecurity industry. Lucie graduated with an MBA from Florida Gulf Coast University. In her spare time, she is an avid fitness enthusiast and a passionate traveler. 

Posted on: September 4, 2019 By: Carolyn Kuczynski

In this two-part blog, we will explore and define Recovery Time Objective (RTO) and Recovery Point Objective (RPO). In part one, we will examine RTO.

What Are RTO and RPO?

Recovery Time Objective and Recovery Point Objective may sound alike, but they are entirely different metrics in disaster recovery and business continuity management.

Calculating your RTO and RPO allows you to plan accordingly with the proper resources, before you need them. In this blog post, we will examine RTO and clear up any confusion.

RTO: Recovery Time Objective

RTO dictates how quickly your infrastructure needs to be back online after a disaster. It is often used to define the maximum downtime a company can tolerate while maintaining business continuity, expressed as a target time for service restoration after a disaster. For example, a Recovery Time Objective of two hours means that all servers with that RTO should be back up and running within two hours of notification of the service disruption.

In the case of a healthcare organization for example, they might ask themselves the following questions when determining RTO for their applications and data:

  • For hosted email servers: How long can we go without accessing our email without impacting the business?
  • For patient record storage: How quickly do we need to provide access to patient records to maintain compliance?
  • Operational applications: Which servers are critical to business operation? How quickly do we need each restored before serious impact to the business?

Depending on your business requirements, you may need a tighter RTO for certain data and applications. A lower RTO comes with an increase in cost, though. Companies must balance the cost of downtime against the cost of protection to ensure the RTO is appropriate. Whatever RTO you choose, it should be cost-effective for your organization.
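One way to picture that trade-off is to model recovery options as tiers and pick the cheapest tier whose RTO meets the requirement. A sketch with entirely hypothetical tiers and prices:

```python
# Entirely hypothetical DR tiers: (name, RTO in hours, annual cost in USD).
TIERS = [
    ("tape_offsite", 72, 5_000),
    ("nightly_backup", 24, 12_000),
    ("continuous_replication", 1, 60_000),
]

def cheapest_tier_meeting_rto(required_rto_hours):
    """Least expensive tier whose RTO fits the requirement; tighter
    RTOs force costlier solutions. Returns None if nothing qualifies."""
    candidates = [t for t in TIERS if t[1] <= required_rto_hours]
    return min(candidates, key=lambda t: t[2])[0] if candidates else None

print(cheapest_tier_meeting_rto(48))  # nightly_backup
print(cheapest_tier_meeting_rto(2))   # continuous_replication
```

In practice, different systems get different RTOs, so a business typically mixes tiers rather than buying the tightest one for everything.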

Whether you need geographic redundancy, virtual backups, or a combination of the two, Data Canopy can help you develop the plan that keeps your business running when disaster strikes.

Ensure mission critical data is secure and available in the event of an emergency with a disaster recovery plan and solution designed for your business. Data Canopy offers geographical redundancy from data centers nationwide, full encryption and corruption detection capabilities, and virtual server backups for seamless fail-over in the event of an outage.

Posted on: July 24, 2019 By: Carolyn Kuczynski

Today’s customers are eager for solutions. Business-owning customers want to spend their time running their business and not on solving complicated IT problems; they WANT to pay you to make IT easier.

That all sounds great, but again — how do you secure those clients?

Iteration. MCSPs must constantly communicate with their clients in a more personal way. Dashboards, reports, email blasts, automated tickets, and generic vCIO content are great. However, they are not enough to create a tailored solution with the complexity required at this point. Clients need a plan; they need to be able to absorb this massive transition slowly. You must create a progressive technology plan that takes them from where they are to where they need to be, leading to higher acceptance and better retention.

Start with your knowledge of their business. If you don’t have this knowledge, get it. Based on their vertical, their maturity and their concerns, start with what matters most. Compliance? Data security? DR? Mobility? Scalability? Pick something to be the hub of your plan; something that justifies all the change and necessary action for the client, and justifies the early steps that don’t seem to solve an immediate problem. It won’t be the same for all clients. It needs to address their concerns and reduce anxiety around the coming changes. In other words: solve a problem. Give them a plan that makes their business more efficient, not just cloudification. Once you have this, communicate, communicate, communicate — not just QBRs or automated communications. Sell the plan, get their buy-in and share what’s next and why it’s important. Remind them why this is happening every step of the way.

The critical steps will be the following, regardless of your justification:

Identity management. You are going to be distributing their services to the best place for the job, but this can’t add 20 different logins to their daily life. As you roll out the rest of the plan, start with single sign-on and access control from the beginning. As a bonus, select a provider that adds SaaS utilization management so that you can be efficient with the clients’ spend on SaaS — Okta and MetaSaaS, for instance.
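The SaaS utilization angle mentioned above can be illustrated with a toy example. This is a hypothetical sketch of the kind of analysis a utilization tool surfaces (the user names, app names, and 30-day threshold are all assumptions, not any vendor's actual API): flag licenses with no recent logins so the client isn't paying for unused seats.

```python
from datetime import date, timedelta

AS_OF = date(2019, 7, 24)  # assumed reporting date

# Hypothetical inventory: (user, SaaS app) -> last login date
last_login = {
    ("alice", "crm"):     date(2019, 7, 20),
    ("bob",   "crm"):     date(2019, 4, 1),
    ("bob",   "storage"): date(2019, 7, 1),
}

def stale_licenses(logins, as_of, days=30):
    """Return (user, app) pairs with no login in the last `days` days."""
    cutoff = as_of - timedelta(days=days)
    return sorted(pair for pair, last in logins.items() if last < cutoff)

print(stale_licenses(last_login, AS_OF))
```

Each flagged pair is a candidate for reclaiming or downgrading a seat during the next review with the client.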

Accelerate. Implement SD-WAN for reliable and responsive connectivity to the cloud — VMware’s VeloCloud, for instance. This will reduce the time you spend managing the network connections that are critical to the solution, and it will keep the experience solid as clients rely more heavily on the cloud via their WAN.

Secure. Secure the solution with a managed NGFW and SOC solution. Protect the endpoints — don’t just trust a firewall, no matter how next-generation it may be. Belts and suspenders. You want to start out secure, not bolt security on after a breach or compromise. This is the first step that will feel like real progress to the client. If this isn’t done right early, it will lead to failures similar to those discussed above with the WAN. End users assume the cloud is inherently secure; you don’t want to misstep and have them question the solution midway.

SaaS offload. Find the needs best served by SaaS. No need to migrate a legacy app that is in need of a refresh and unable to realize the promise of the cloud due to its shortcomings from age. Don’t force it. Ask yourself, “Does the SaaS alternative really solve their problem?”

Migrate. Migrate their legacy apps to IaaS. Migrate their desktops to DaaS or a workspace solution. You won’t be able to replace everything with SaaS. It’s not the best solution for every workload and forcing it will just decrease the clients’ efficiency and happiness with the solution. DaaS and IaaS will give their legacy applications the SaaS-like feel of mobility and accessibility. One more note: Don’t force DaaS until everything else is in order. It’s another place you can undo a lot of trust if the predecessor tasks are not solid and complete.

Protect. Don’t forget a DR and backup strategy. That’s another place that clients think is magic in the cloud. Backup SaaS data, replicate IaaS data to multiple regions. Have a DR strategy for remote working. Don’t undersell the value of having a DR plan for not only major natural disasters but things like holidays, inclement weather, moving offices or growing quickly.
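A simple way to keep the "replicate to multiple regions" advice honest is a periodic audit of the protection inventory. The sketch below is hypothetical (workload and region names are invented, and a real check would query the backup platform rather than a hard-coded dict): it flags any workload whose replicas fall below a two-region DR policy.

```python
# Assumed inventory: workload -> regions holding a current replica.
replicas = {
    "erp-db":   ["us-east", "us-west"],
    "file-srv": ["us-east"],
    "mail":     ["us-east", "eu-west"],
}

def unprotected(inventory, min_regions=2):
    """Workloads whose distinct-region replica count is below policy."""
    return sorted(w for w, regions in inventory.items()
                  if len(set(regions)) < min_regions)

print(unprotected(replicas))  # workloads needing attention before a disaster, not after
```

Running a check like this on a schedule turns the DR plan from a document into something continuously verified.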

Measure and improve. The cloud offers an endless stream of information about your clients’ workloads. Use this to continually improve through discussions of changes to their business, growth of resources, continued migrations to SaaS, auditing, etc.

Above are some tools to help with the planning of such a strategy and communicating the value. It’s time to evolve. It’s time to change the game again. You will differentiate yourself and secure long-term clients.

Posted on: July 10, 2019 By: Carolyn Kuczynski

To provide its clients with greater flexibility and choice in disaster recovery solutions, Expedient recently launched a new disaster recovery as a service (DRaaS) offering based on VMware vCloud Availability — a unified solution built to offer simple, more secure, and cost-effective onboarding, migration, and disaster recovery services for multi-tenant VMware clouds.

“By expanding our disaster recovery solutions to now include replication powered by VMware vCloud Availability, Expedient is furthering our commitment to enabling organizations of all sizes to protect their workloads from unexpected events,” said John White, Chief Innovation Officer for Expedient. “We recognize that organizations have workloads that require varying levels of protection from disaster, and we continue to answer those needs with an evolving suite of disaster recovery solutions.”

Cloud-based disaster recovery is one of the fastest-growing segments of the cloud services industry. VMware vCloud Availability gives cloud providers the ability to capitalize on this demand and deliver increased choice to end users. Additionally, as enterprises increasingly implement hybrid cloud strategies, vCloud Availability provides an opportunity for cloud providers to deliver simple, integrated migration and onboarding services to the cloud and from cloud to cloud.

Over the last decade, Expedient — a Platinum-level CNSG supplier — has helped hundreds of companies protect business-critical data and mitigate risk with its turnkey disaster recovery services. For three years running, Gartner has positioned Expedient in its Magic Quadrant for DRaaS. The report analyzes the strengths and weaknesses of the 11 leading vendors in the growing DRaaS industry. Download your free copy of the 2019 Magic Quadrant report courtesy of Expedient.


About Expedient

Expedient helps companies transform their IT operations through award-winning cloud solutions and managed services including disaster recovery, security and compliance, and more. Named VMware’s Cloud Partner of the Year and acknowledged in Gartner’s Magic Quadrant for Disaster Recovery as a Service, Expedient’s solutions and services ease the transition to the cloud, enabling organizations to focus on strategic business innovation while the Expedient team handles the operation of the information technology needed to support it. Learn more at