AWS Partner Network (APN) Blog

New AWS CloudFormation Stack Quick-Create Links further simplify customer onboarding

Post by Ian Scofield and Erin McGill

We recently wrapped up a series (Parts 1, 2, 3, and 4) on using AWS CloudFormation to ease the creation of cross-account roles during customer onboarding. The series takes the reader through creating custom launch stack URLs for AWS CloudFormation, using an AWS Lambda function to generate a custom template with individualized parameters, and automatically sending the Amazon Resource Name (ARN) of the created cross-account role back to the SaaS owner. The process removes many of the manual steps involved in creating a cross-account role and the associated policy documents, reducing the chances of failure.

Although this solution simplified the workflow and helped reduce failure rates during onboarding, there were still two areas open to improvement:

  • We required the SaaS owner to customize each customer’s template and hardcode values. These templates needed to be stored, shared publicly, and then promptly deleted.
  • The AWS CloudFormation wizard contained multiple pages, and partners told us they wanted to streamline this process.

At AWS, we listen to our customers and partners to learn where we can improve, and our roadmap is almost exclusively driven by customer feedback. Based on the feedback we received on the customer onboarding process, we're pleased to announce that the AWS CloudFormation team has added the Stack Quick-Create Links feature, which solves both issues outlined above.

  • Embedding parameters in the launch stack URL – The AWS CloudFormation team has removed the need to store customized templates by adding the ability to embed parameter values directly in the launch stack URL.
  • Streamlined launch stack wizard – Users will now be directed to an AWS CloudFormation wizard that has been reduced to a single page.

Embedding Parameters in the Launch Stack URL

A launch stack URL makes it easy for customers to launch AWS CloudFormation templates by sending them straight to the AWS CloudFormation wizard with the template location and stack name pre-populated.

As a refresher, the URL looks like this:

https://console.aws.amazon.com/cloudformation/home?region=region#/stacks/new?
stackName=stack_name&templateURL=template_location

In the scenario we outlined in our series, we used a launch stack URL to help customers launch an AWS CloudFormation template and create a cross-account role in their AWS account. The template associated with the URL contained unique, customer-specific values for the trusted account ID and external ID, and had to be generated for each customer. The template was then hosted in an S3 bucket until the customer launched it. We also needed a cleanup method to ensure that templates didn't remain accessible post-launch. This process was burdensome for the partner and required additional infrastructure, including multiple Lambda functions and an S3 bucket.

We discussed these challenges with the AWS CloudFormation service team, and they released a feature that lets you embed parameter values directly in the launch stack URL. This enables us to specify unique values for the trusted account ID and the external ID in the URL itself, so a single generic template can be reused and the customized link generated on the fly. The partner no longer has to create, store, and ultimately delete per-customer templates. To embed a parameter, prefix its name with param_ and supply it as a name=value pair.

The new syntax looks like this:

https://console.aws.amazon.com/cloudformation/home?region=region#/stacks/create/review?
stackName=stack_name&templateURL=template_location&param_name1=value1&param_name2=value2

Here’s an example URL that we can use in our Cross-Account Role scenario:

https://console.aws.amazon.com/cloudformation/home?region=region#/stacks/create/review?
stackName=stack_name&templateURL=template_location&param_ExternalId=abcd1234&param_TrustedAccount=123456789012
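
To make this concrete, here's a minimal Python sketch (our illustration, not part of the original series) that assembles a quick-create link with embedded parameters. The bucket, template file name, and parameter values are placeholders.

from urllib.parse import urlencode

def build_quick_create_url(region, stack_name, template_url, parameters):
    """Return a CloudFormation quick-create link with param_* values embedded."""
    base = f"https://console.aws.amazon.com/cloudformation/home?region={region}"
    query = {"stackName": stack_name, "templateURL": template_url}
    # Prefix each parameter name with "param_" so the wizard pre-populates it.
    query.update({f"param_{name}": value for name, value in parameters.items()})
    return f"{base}#/stacks/create/review?{urlencode(query)}"

print(build_quick_create_url(
    region="us-east-1",
    stack_name="CrossAccountRoleSetup",
    template_url="https://s3.amazonaws.com/example-bucket/cross-account-role.yml",  # placeholder
    parameters={"TrustedAccount": "123456789012", "ExternalId": "abcd1234"},
))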

Streamlined Launch Stack Wizard

You may have noticed above that another part of the URL has changed: /stacks/new has become /stacks/create/review. This new feature streamlines the AWS CloudFormation wizard by removing additional pages for certain use cases. Every partner strives to make onboarding as quick and smooth as possible to reduce the risk that the customer will abandon the process.

Our earlier process required the customer to navigate through four separate sections, like this:

Figure 1: Traditional AWS CloudFormation launch stack wizard

 

When you change the /stacks/new part of the URL to /stacks/create/review in your launch URL, customers are greeted by a single review screen and don't have to click Next on any pages. Any additional parameters that aren't pre-populated appear here as well for the user to fill out. All they need to do is click the Create button at the bottom of the screen.

Figure 2: Streamlined AWS CloudFormation launch stack review page

 

As you can see, this drastically streamlines the process and enables a much quicker and smoother workflow for onboarding customers.

Here’s an example URL that we can use in our cross-account role scenario to generate the screenshot in Figure 2:

https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/create/review?templateURL=https://s3-us-west-2.amazonaws.com/isco/wizard.yml&stackName=CrossAccountRoleSetup&param_TrustedAccount=123456789012&param_ExternalId=abcd1234

Note: This feature doesn’t currently support NoEcho or password parameters for security reasons.

 

Try it Out

With the addition of embedded parameters in the URL and the streamlined wizard, customers get a faster, smoother onboarding experience, and partners need less infrastructure to manage custom workflows. To learn more about these features, check out the AWS CloudFormation documentation.

Feel free to try out these two new features and let us know your thoughts in the comments below.  If you have any ideas on how to make any of our services better or to improve the customer experience, please reach out and let us know!

Unlocking Hybrid Architecture’s Potential with DevOps

Last week in our MSP Partner Spotlight series, we heard from Jeff Aden at 2nd Watch and learned about the value that next gen MSPs can bring to their customers through well managed migrations and through 2nd Watch’s Four Key Steps of Optimizing Your IT Resources. Another area of new value that AWS MSPs can bring to their customers is management of their hybrid IT architecture, allowing customers at any stage of the cloud adoption journey to best leverage the AWS Cloud. This week we hear from Datapipe (APN Premier Partner, MSP Partner and holder of several AWS Competencies and AWS Service Delivery designations) as they discuss their approach and considerations in supporting their customers’ hybrid architectures.

Unlocking Hybrid Architecture’s Potential with DevOps

By David Lucky, Director of Product Management at Datapipe

Hybrid IT architecture, or what many customers call hybrid cloud, is increasingly prevalent in today’s fast-paced technology industry. Over the past few years, Datapipe has seen an initial reluctance towards cloud adoption transform into excitement, and hybrid architecture is emerging as a go-to solution for enterprise organizations looking for a way to manage their complex operations and run AWS as a seamless extension of their on-premises infrastructure.

Hybrid architecture gives organizations Application Programming Interface (API) accessibility, providing developers with programmatic access to control their environments through well-defined methods. APIs, commonly defined as "code that allows two software programs to communicate with each other," are increasing in popularity in part due to the rise of cloud computing, and have steadily improved software quality over the last decade. Now, instead of custom-developing software for every specific purpose, teams often write software against APIs with widely useful features, which reduces development time and cost and lowers the risk of error.

With API accessibility, developers can easily repurpose proven APIs to build new applications instead of having to build and manage everything manually. This gives them more room to experiment and innovate and creates a culture of curiosity. In this way, the API accessibility of hybrid architecture leads to a necessary rebalancing of development and operations teams, which can now solve problems earlier and more automatically than was possible with purely on-premises solutions.

To maintain the culture of curiosity that’s enabled by API accessibility through hybrid environments, we recommend organizations remove the silos that traditionally separate development and operations teams, and encourage open communication and collaboration – better known as DevOps. Implementing a DevOps culture helps organizations take advantage of a hybrid infrastructure to increase efficiencies along the entire software development lifecycle (SDLC). At Datapipe, we understand how critical the adoption of DevOps methodologies and agile workflows are for IT organizations to remain competitive and respond to the constantly evolving technology landscape. It’s the reason we expanded our professional services to include DevOps, and why we help organizations make the cultural switch to DevOps the right way, starting with people.

Individuals Over Tools

While many people conflate DevOps with an increase in automation tools, an organization can't fully realize DevOps culture without starting with its people. A DevOps culture fosters open communication and constant collaboration between team members. It dissolves barriers between operations and development departments, giving everyone ownership over the SDLC as a whole, beyond their traditional, individual responsibilities. Being able to see the big picture allows team members to transition from being reactive to being proactive. That, in turn, involves shifting away from addressing problems as they arise to determining the root cause of the problem and finding a solution as part of a continuous improvement mindset. Organizations that fully embrace this full-stack DevOps approach can provision a server in minutes instead of weeks, which is a vast improvement over the traditional SDLC model.

This mindset also means moving from a reactionary approach and solving problems through “closing tickets” to a proactive approach that involves consistently searching for inefficiencies and addressing them in real-time, so an organization’s software is continually improving at the most fundamental levels. Of course, addressing inefficiencies in the software also means addressing inefficiencies in workflows, which leads to the use of DevOps tools such as automation and writing reusable utilities.

However, productivity tools won't increase efficiency on their own. An effective DevOps culture starts with open collaboration between team members, and is then reinforced by tools. At Datapipe, we see incorporating a DevOps culture through the lens of the "Agile Manifesto," which promotes "individuals and interactions over processes and tools." When you combine agile working practices with DevOps, you can manage change in a feature-focused manner, providing faster interaction and response. Managing change in this way means that organizations achieve their goals through a strong DevOps culture that automates the majority of the overall development and delivery process, enabling teams to focus on areas that create a differential experience. This takes time – and collaboration among team members – to set up. The real-time collaboration that marks a full-stack DevOps approach reduces the number of handoffs in the SDLC, thus accelerating the entire process and decreasing an application's time-to-market.

Looking Ahead

Hybrid architecture growth is expected to continue. Industry analyst firm IDC predicts that 80 percent of all enterprise IT organizations will commit to hybrid architecture by the end of this year. This prediction is in line with what we’re seeing from our customers. As a next-gen MSP, we’ve seen an increase in enterprise companies looking for guidance on incorporating a DevOps culture to complement their digital transformations.

Take our work with British Medical Journal (BMJ), for example. BMJ started out over 170 years ago as a medical journal. Now, as a global online brand, BMJ has expanded to encompass 60 specialist medical and allied sciences journals with millions of readers. As a result of their dramatic growth, their old infrastructure could no longer support their application release process. In addition, as an increasingly global organization, BMJ’s capacity for allowing downtime – scheduled or otherwise – was diminishing. To solve this problem, BMJ needed to move to a sustainable development cycle of continuous integration and automation, which is only possible through a shift to a DevOps type culture. We helped BMJ implement this culture while assisting with changes to their infrastructure. The switch to a more open, collaborative culture not only allowed BMJ to implement a sustainable development cycle, complete with continuous integration and automation, but it also made them feel better prepared to take their next planned step of moving workloads to the AWS Cloud and embracing a hybrid environment. (More about how we helped BMJ move to a DevOps-oriented culture can be found here).

If you’re interested in leveraging DevOps to get the most out of your hybrid environment, we recommend starting with the following considerations:

  • Leverage object-oriented programming principles such as abstraction and encapsulation to build reusable, parameterized components that can be assembled like building blocks. This can be done in configuration management with Chef recipes, Puppet modules, and Ansible roles, or through infrastructure building blocks like Terraform modules and AWS CloudFormation templates.
  • When automating infrastructure management, test destruction as deeply as the creation process. This gives you the ability to iterate and test cleanly; a minimal create-and-destroy sketch follows this list.
  • Balance the effort you put into upfront engineering against ongoing operational management. More upfront engineering unlocks some great features, such as Auto Scaling on AWS. For more steady-state applications, the effort to set up and configure resources directly can sometimes be much less than the effort of building full automation, which makes it worthwhile to look for open-source modules to help with your infrastructure and configuration management workflows.
  • For Auto Scaling groups within AWS, consider, as you engineer your process, how much time your workload can tolerate between AWS detecting the need for a new instance and that instance becoming fully operational. Fully baked Amazon Machine Images tend to reach an operational state fastest, but they require building an image for every version of your application; Packer is a great tool for this purpose. The more you rely on user data or configuration management processes at boot, the longer your instance will take to become operational. Finally, keep in mind that processes like domain joins and instance renames require reboots and can add time to the launch process, so use them as sparingly as possible.
  • For a low-latency link between your resources in and out of the cloud, consider taking advantage of higher-level services like AWS Direct Connect, which provides a virtual interface directly to public AWS services and allows you to bypass Internet service providers in your network path. Datapipe client ScreenScape used Direct Connect to link their on-premises environment to Amazon CloudFront for a cloud environment that’s highly available, fully managed, and able to scale over time with proven capability. (Learn more here.)
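
As referenced above, here is a hedged boto3 sketch of the create-then-destroy pattern for testing infrastructure automation. The stack name, template URL, and parameter values are illustrative placeholders, not part of Datapipe's tooling.

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")
STACK = "lifecycle-test-stack"  # placeholder name

# Create the stack and wait until it is fully built.
cfn.create_stack(
    StackName=STACK,
    TemplateURL="https://s3.amazonaws.com/example-bucket/component.yml",  # placeholder
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "test"}],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
cfn.get_waiter("stack_create_complete").wait(StackName=STACK)

# ... run validation checks against the created resources here ...

# Exercise teardown just as thoroughly, so the component can be created
# and destroyed cleanly on every iteration.
cfn.delete_stack(StackName=STACK)
cfn.get_waiter("stack_delete_complete").wait(StackName=STACK)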

Hybrid architecture offers organizations the power of both on-premises and cloud environments like AWS, giving them the tools to grow and innovate at a lower cost. For companies to fully capitalize on the benefits of these mixed environments, a culture change is necessary. By shifting to a DevOps culture and enabling teams to work together from a full-stack perspective, organizations can not only increase efficiency in their SDLCs, but also open up opportunities for immense engagement and creativity – qualities necessary for innovation. A next-generation MSP, with DevOps and Software-as-a-Service (SaaS) capabilities, can be a valuable guide for IT teams on their hybrid cloud journey. At Datapipe, we pride ourselves on being a next-generation MSP, and our proficiency with DevOps was a key differentiator that led to our position as a leader in the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide. By partnering with a next-gen MSP, like those included in the AWS Managed Service Partner program, organizations don't have to make the shift to DevOps on their own.

To get started or for assistance on your cloud journey, contact us at www.datapipe.com

David Lucky

Director of Product Management

www.Datapipe.com

Why Next-Generation MSPs Need Next-Generation Monitoring

We wrote a couple of months ago about how ISVs are rapidly evolving their capabilities and products to meet the growing needs of next generation Managed Service Providers (MSPs), and we heard from Cloud Health Technologies about how they are Enabling Next-Generation MSPs with cloud management tools that span the breadth of customer engagements from Plan & Design to Build & Migrate to Run & Operate and to Optimize. Today we are sharing a guest post from APN Advanced Technology and SaaS Partner, Datadog, as they address the shift from traditional to next gen monitoring and how these capabilities elevate the level of value that an MSP can deliver to their customers.

Let’s hear from Emily Chang, Technical Author at Datadog.

Why Next-Generation MSPs Need Next-Generation Monitoring

To stay competitive in today’s ever-changing IT landscape, managed service providers (MSPs) need to demonstrate that they can consistently deliver high-performance solutions for their customers. Rising to that challenge is nearly impossible without the help of a comprehensive monitoring platform that provides insights into customers’ complex environments.

Many next-generation MSPs team with Datadog to gain insights into their customers' cloud-based infrastructure and applications. In this article, we'll highlight a few of the ways that MSPs use Datadog's monitoring and alerting capabilities to proactively manage their customers' increasingly dynamic and elastic workloads with:

  • Full visibility into rapidly scaling infrastructure and applications.
  • Alerting that automatically detects abnormal changes.
  • Analysis of historical data to gain insights and develop new solutions.
  • Continuous compliance in an era of infrastructure-as-code.

Full visibility into rapidly scaling infrastructure and applications

As companies continuously test and deploy new features and applications, MSPs need to be prepared to monitor just about any type of environment and technology at a moment’s notice. Whether their customers are running containers, VMs, bare-metal servers, or all of the above, Datadog provides visibility into all of these components in one place.

Datadog’s integration for Amazon Web Services (AWS) automatically collects default and custom Amazon CloudWatch metrics from dozens of AWS services, including Amazon Elastic Compute Cloud (Amazon EC2), Elastic Load Balancing, and Amazon Relational Database Service (Amazon RDS). In total, Datadog offers more than 200 turn-key integrations with popular infrastructure technologies. Many integrations include default dashboards that display key health and performance metrics, such as the AWS overview dashboard shown below.

 

 

MSPs need the ability to monitor every dimension of their customers’ modern applications—as well as their underlying infrastructure. As customers continuously deploy new features and applications in the cloud, MSPs can consult a global overview of the infrastructure with Datadog, and then drill down into application-level issues with Application Performance Monitoring (APM), without needing to switch contexts. Datadog APM traces individual requests across common libraries and frameworks, and enables users to identify and investigate bottlenecks and errors by digging into interactive flame graphs like this one:

 

Infrastructure-aware APM gives MSPs full-stack observability for their customers’ applications, which is critical for troubleshooting bottlenecks in complex environments.

Alerting that automatically detects abnormal changes

Because today’s dynamic cloud environments are constantly in a state of flux, MSPs can benefit immensely from sophisticated alerts that can distinguish abnormal deviations from normal, everyday fluctuations. As customers’ infrastructure rapidly scales to accommodate changing workloads, what constitutes a normal/healthy threshold often will need to scale accordingly. Customers may also wish to track critical business metrics, such as transactions processed, which often exhibit normal, user-driven fluctuations that correlate with the time of day or the day of the week.

Both of these scenarios explain why threshold-based alerts, while helpful for many types of metrics, are not ideal solutions for detecting more complex issues with modern-day applications. To accommodate these challenges, next-generation MSPs need a monitoring solution that uses machine learning to automatically detect issues in their customers’ metrics. Datadog’s anomaly detection algorithms are designed to distinguish between normal and abnormal trends in metrics while accounting for directional trends, such as a steady increase in transaction volume over time and seasonal fluctuations.
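
As a rough illustration of what such an alert might look like, here is a short sketch that creates an anomaly-detection monitor with the datadogpy client. The metric name, query arguments, keys, and notification handle are placeholders; consult Datadog's documentation for the current anomalies() syntax and monitor options.

from datadog import initialize, api

initialize(api_key="<DATADOG_API_KEY>", app_key="<DATADOG_APP_KEY>")

# Alert when the metric deviates from its learned pattern rather than a fixed threshold.
api.Monitor.create(
    type="query alert",
    query="avg(last_4h):anomalies(avg:app.transactions.processed{*}, 'basic', 2) >= 1",  # illustrative
    name="Transaction volume deviating from its normal pattern",
    message="Transactions are trending outside the expected range. @pagerduty",
    tags=["team:msp", "customer:example"],
)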

Datadog also uses machine learning for outlier detection—algorithms that determine when a host or group of hosts behaves differently from its peers. This effectively enables MSPs to make sense of how resources are being used within a customer’s infrastructure, even as it rapidly scales to accommodate varying workloads. Whenever an outlier monitor is triggered, MSPs can consult the monitor status page, like the one shown below, to quickly understand when the outlier was detected, and which component(s) of the infrastructure it may impact.

 

Analyzing historical data to gain insights and develop new solutions

As their customers' environments scale and grow increasingly complex, MSPs need an effective way to visualize how all of those components change over time. For historical analysis, Datadog retains all data at full granularity for 15 months, which allows MSPs to analyze how their customers' infrastructure and applications have evolved and make informed decisions going forward. In addition to visualizing AWS services and other common infrastructure technologies in default dashboards, MSPs can create custom visualizations that deliver deeper insights into their metrics. These visualizations include:

  • Trend lines: Use regression functions to visualize metric trends
  • Change graphs: Display how a metric’s value has changed compared to a point in the past (an hour ago, a week ago, etc.)
  • Heat maps: Use color intensity to identify patterns and deviations across many separate entities. In the example below, a Datadog heat map shows Docker CPU usage steadily trending upward across a large ensemble of containers

 

Ensuring continuous compliance in an era of infrastructure-as-code

Infrastructure-as-code has revolutionized the way that organizations deploy new assets and manage their existing resources, enabling them to become more agile, continuously deploy new features, and quickly scale resources to respond to changing workloads. However, as these tools are more widely adopted, they also require organizations to monitor their assets more carefully, in order to meet compliance requirements.

Datadog integrates with key infrastructure-as-code tools like Chef, Puppet, and Ansible to provide MSPs with a real-time record of configuration changes to each customer’s infrastructure. Datadog also ingests AWS CloudTrail logs to help MSPs track API calls made across AWS services and aggregates them in the event stream for easy reference. In the example below, you can see that CloudTrail reports any successful and failed logins to the AWS Management Console, as well as any EC2 instances that have been terminated—and who terminated them.

 

With all of this data readily available, MSPs can track critical changes as they occur in real time and set up monitors to proactively audit and enforce continuous compliance of their customers’ AWS environments. They can also search and filter for specific types of changes in the event stream and then overlay them on dashboards for correlation analysis, as shown below.

 

Event-based alerts help MSPs automatically detect unexpected changes and/or immediately notify their customers about events that may endanger compliance requirements. These alerts can also be configured to trigger actions in other services through custom webhooks. By making all of this information available in one central location, Datadog prepares MSPs with the data they need to respond quickly to compliance issues.

Next steps for next-generation MSPs

Datadog is pleased to be able to provide monitoring capabilities that help MSPs navigate the challenges of delivering high-performance solutions for dynamic infrastructure and applications. To learn more about how Datadog helps fulfill AWS MSP Partner Program checklist items needed to apply for the AWS Managed Service Program, download our free eBook. You can also view a recording of our recent webinar with AWS and CloudHesive, "What it Means to be a Next-Generation Managed Service Provider," here.

Cloud Migration: Measurement at the Moment of Truth

By Lee Atchison, Sr. Director of Strategic Architecture, New Relic

This is a guest post from New Relic, an Advanced APN Technology Partner and AWS Migration Competency Partner.

When migrating applications to AWS, it doesn’t matter if you are rehosting, replatforming, or planning a full refactor—as you take each step, you should know what’s working. Above all, you need to know if the user  experience of your migrated app is what you expect it to be—especially at that moment when the workload has moved and everyone is watching.

How can you tell that a migration has successfully completed? How can you be sure that you haven’t introduced a problem or a looming concern into your application? How do you determine that you don’t need to do any further tuning or adjusting to ensure that your app that was previously running fine on premises is stable and performing well in the cloud?

Ultimately, you can’t declare a migration successful until you’ve proven it works as expected in the new environment.

Defining Your Acceptance and Performance Goals

A big part of determining whether a migration has been successful or acceptable is to consider user expectations, costs, and tolerances for application performance once the migration is complete. For example, the acceptance criteria for a migration focused on cost reduction might look something like this (a scripted version of these checks appears after the list):

  • Response time to end users should be within 5% of pre-migration levels.
  • Steady state (i.e., non-peak) server utilization should be within 20% of pre-migration levels.
  • Steady state (i.e., non-peak) infrastructure costs should be reduced by at least 10% compared to pre-migration levels.
  • Service error rates should be equal to, or lower than, pre-migration levels.
  • Application availability levels and performance should meet or exceed pre-migration levels.
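
As promised, here is a simple sketch (ours, not New Relic's) of how criteria like these can be encoded as automated checks. The baseline and post-migration numbers are made up; in practice they would come from your monitoring tools.

def within(pre, post, tolerance):
    """True if post deviates from pre by no more than the given fraction."""
    return abs(post - pre) / pre <= tolerance

baseline = {"response_ms": 220.0, "cpu_util": 48.0, "monthly_cost": 41000.0, "error_rate": 0.004}
post     = {"response_ms": 228.0, "cpu_util": 55.0, "monthly_cost": 35500.0, "error_rate": 0.003}

checks = {
    "response time within 5% of pre-migration": within(baseline["response_ms"], post["response_ms"], 0.05),
    "steady-state utilization within 20%": within(baseline["cpu_util"], post["cpu_util"], 0.20),
    "infrastructure cost reduced by at least 10%": post["monthly_cost"] <= baseline["monthly_cost"] * 0.90,
    "error rate equal or lower": post["error_rate"] <= baseline["error_rate"],
}

for name, passed in checks.items():
    print(f"{'PASS' if passed else 'FAIL'}: {name}")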

Your plan will differ, depending on your migration goals. For more critical applications, you may have more goals with tighter tolerances. For less critical applications, you may have fewer and more general goals. You may be migrating to serve spiky traffic loads more easily, to allow the creation of additional data centers, to reduce infrastructure management burdens, or even to gain access to higher-performing servers. Whatever your goals, your plan should:

  • Specify and quantify the desired improvements you expect from the migration.
  • Identify other critical application metrics that you want to make sure remain acceptable and do not deviate a significant amount during the migration.
  • Require that error rates and error types do not change significantly after the migration, to confirm that new errors have not been introduced into your system.

Make sure all the acceptance criteria you have established are goals that you can measure and monitor independently of any infrastructure. This is where New Relic comes into play. New Relic provides a set of real-time monitoring capabilities that measure customer experience, code execution, and infrastructure behavior, all reporting into a common platform. For example, in situations where you are moving a workload from an under-provisioned infrastructure to AWS, an improvement in user experience can be measured and proven (see Figure 1).

Figure 1 – When moving from an under-provisioned legacy infrastructure to AWS, New Relic application performance monitoring (APM) clearly shows a measurable improvement in user experience.

Baseline Your End-User Experience and Application Performance

Once you’ve established your goals, you must measure the baseline performance of your application. This is critical for two reasons:

  • During the migration, deviations from the established baseline can be early indicators of bugs or problems created during the migration.
  • After migration, you’ll need the pre-migration baseline and your established goals to judge the success of the migration and help determine when it can be considered complete.

What do you need to measure to create that baseline? The answer depends on your application and business needs, but typically covers applications and services, servers, and end-user experience – all measurable with New Relic:

  • For end users: Time to glass performance, error rates, types of errors, browser-specific and mobile-specific performance, and latency from multiple geographic locations all need to be considered. Moving from a specific data center to a cloud-based infrastructure may impact individual users’ performance in different ways based on their locations. Tools such as New Relic Browser and New Relic Mobile can help measure changes to the end user experience during a migration. Additionally, setting up synthetic user monitors using New Relic Synthetics can capture data about specific types of user interactions with the back-end services being migrated.
  • For each application/service: Response time, error rates, types of errors, latency, and call rates are all important to see how the services perform on their new infrastructure, and to detect code-level problems introduced by the new environment. New Relic’s Application Performance Monitoring (APM) is designed to help you measure this information so you can see how changes impact response times and errors in real time.
  • For each server: CPU, memory, and load average under various usage scenarios (different traffic loads) help you see how new server types and configurations handle load. For these measurements, New Relic Infrastructure provides capabilities for measuring the performance of the servers running your application. This step is particularly important for monitoring how effectively the new servers are being used by the migrated application. (A small baselining sketch follows this list.)
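
The following sketch (not New Relic tooling) shows one way to turn raw samples exported from your monitoring platform into a baseline you can compare against later; the sample numbers are made up.

from statistics import mean, quantiles

samples = {
    "response_ms": [210, 225, 198, 240, 215, 230, 220, 205],
    "error_rate": [0.004, 0.003, 0.005, 0.004, 0.004, 0.003, 0.004, 0.005],
}

baseline = {}
for metric, values in samples.items():
    baseline[metric] = {
        "mean": round(mean(values), 4),
        # The 95th percentile captures the tail that users actually feel.
        "p95": round(quantiles(values, n=20)[-1], 4),
    }

print(baseline)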

User experience is, by far, the most important aspect to preserve during a migration. Understanding how individual services work internally is important, particularly for identifying and resolving issues, but the end-user experience is the ultimate result you want to preserve. You will not be successful if your end-user’s experience is noticeably worse after the migration. Tools like New Relic Synthetics are especially valuable in determining if, and by how much, the end-user experience has changed.

You may want to use New Relic Insights to create a migration-specific dashboard that shows the specific metrics you are monitoring for your acceptance criteria. All the New Relic products mentioned above can feed data into dashboards created in New Relic Insights, which can present a single-page view of how your migration is advancing. But remember that such a dashboard is designed to present a high-level overview of your application performance. Don’t forget to drill down to the more detailed performance information within the individual products to determine how your application is performing during the migration.

You may need to establish multiple baselines based on your application’s usage patterns. If it experiences usage peaks and valleys, you’ll want to establish baselines at multiple points and correlate to the specified usage patterns.

This may seem like an excessive amount of performance information to collect, but changing the performance of a given service or system can impact the performance of the entire application and uncover hidden defects and faults that appear only under different conditions. You aren't just changing the application's performance; you may also be shaking loose existing bugs and shortcomings in your applications. That means you need to understand how the change impacts the performance of all the components in your system and all your users, as well as monitor for newly surfaced bugs and defects.

For packaged applications that you do not directly control, you may not be able to get all the application/service-level performance and error information described above. For these applications, focus on end-user performance and server-level performance. Use tools such as New Relic Synthetics and New Relic Infrastructure to give you good guidance on the most critical aspects of how these pre-packaged applications perform in the cloud.

During the Migration

Once your baselines are established, you can begin the migration, applying the exact same monitoring tools to allow for apples-to-apples performance comparisons. Begin to add in cloud-specific monitoring capabilities (such as Amazon CloudWatch) as incremental improvements to your migration story and to assist in problem diagnosis.

As the migration progresses, use your New Relic dashboards to keep an eye out for spikes in any of your critical performance metrics. Look for jumps where performance suddenly becomes, say, 20% worse than before. If you see any of these performance changes, immediately examine and determine the cause before moving to the next phase of the migration. Spikes and jumps are good indications of migration-related bugs or problems. Pay particular attention to error rates and response time values, as these are often the first indications of a problem.

If you have not already done so, I recommend you assign individual services and capabilities—and the monitoring associated with those services and capabilities—to individual teams. It then becomes that team’s responsibility to make sure their specific services are properly functioning during the migration, and to resolve issues that occur. This may or may not be part of a formalized DevOps process, but certainly performing a cloud migration provides a perfect opportunity to begin instilling DevOps practices into your organization.

During the migration, it’s important to prioritize migration-related issues at a higher level than new business requests. If you let a migration-related issue persist, you may lose visibility into the root cause of the issue. The issue could fester and cause more serious problems later. It is best to resolve the issues when eyes are on the problem and appropriate resolutions can be implemented.

The discipline required to prioritize migration-related activities can be challenging, but it is essential to make sure the migration completes successfully.

Don’t be afraid to undo a migration. If performance becomes unacceptable, don’t be afraid to back out the migration of a given service or module and reevaluate what’s needed before continuing. Implementing your migration using incremental steps helps limit the “blast zone” so it can be easily rolled back. This may not always be possible, but being able to quickly roll back changes should be a goal at every step.

Post-Migration

After you’ve moved your last service to the cloud, you’ll want to validate whether you’ve met your expectations and completed the migration. To do so, repeat the pre-migration baselining measurement process, examining your application using the same usage pattern times as you did before the migration.

Finally, compare the new baselines to the pre-migration baselines. Some metrics will likely have improved while others may have degraded. Compare the results to your plan to validate that the changes meet your expectations. If they don’t, you have a couple of options:

  • Examine the deviations using your monitoring tools to determine the causes. If you’ve been doing this all along (as suggested above), you should already be aware of these issues.
  • Determine if the deviation is acceptable to your business. You may decide that in the context of the overall improvements, a negative deviation in a specific metric is acceptable.

Your migration should be considered successful only after:

  • All planned performance improvements have been validated in your new baselines.
  • All (negative) performance deviations fall within planned acceptable levels.
  • No unexpected migration-specific errors have been identified.
  • Any remaining deviations from your plan can be explained and are deemed acceptable to your business.

Figure 2, below, shows an example dashboard with both pre- and post-migration data. Notice that some performance measurements have changed—some for the better and some for the worse. But if all your acceptance criteria goals have been met, or if you understand the deviations and can safely and knowingly live with them, you can safely consider the migration a success.

Figure 2: New Relic services all use a common analytics platform that can sort, filter, analyze, and alert on all the raw data you decide to collect. This dashboard compares apps running in an on-premises data center vs. on AWS infrastructure. Availability data comes from New Relic Synthetics, Error Rate and Response Times pull from New Relic APM, and CPU Utilization from New Relic Infrastructure.

What Can Go Wrong?

Even a simple lift-and-shift of a seemingly unchanged application to a new environment can introduce concerns and performance deviations that can torpedo a cloud migration. These issues can greatly extend the migration process or even require a rollback to pre-migration systems.

Figure 3: Most services rely on connections to other services in some way. New Relic measures the individual transactions between every service.

Here are some things to watch for:

  • Inter-service latency gets worse. (see Figure 3 above) Since you are moving services to a new environment, the expected latency between services is likely to change. Some values may improve, some may get worse. If a specific service-to-service latency gets significantly worse, it can trigger service availability problems and other errors.
  • Inter-service volatility. Even if inter-service latency isn’t significantly degraded, an increase in volatility can play havoc on your services and impact availability and error rates. It can also complicate future problem diagnoses. (see Figure 3 above)
  • Configuration changes. Even if you don’t plan to change your application, you’ll likely need to make some tweaks and adjustments to configuration values for the move to the cloud. These changes, even if you think they’re benign, can create hidden problems.
  • Server performance. The servers running your applications in the cloud may be faster or slower than your pre-migration on-premises servers—or their performance could be more variable than before. These server performance changes can reveal previously hidden defects in your application.

Even if your applications are stable and functioning normally in your existing environment, moving them to a new environment can introduce unknowns and uncertainty to your system. Detecting and resolving issues stemming from that uncertainty early in the migration process will make the migration smoother and more likely to be successful.

Moving Beyond the Migration

After the workloads have been moved and deemed stable and performant, expectations of how the cloud can help evolve your apps will rise. Just as you needed to understand the changes that occurred during the migration, you now need to understand the impact of ongoing changes in order to make daily, data-driven decisions about adopting new cloud services, changing code, and optimizing infrastructure. New Relic's tools are designed to help you monitor your apps and infrastructure as you refactor and re-architect your apps on AWS.

Next Steps

In this blog, we've discussed the needs, measurement techniques, and risks that need to be mitigated during a migration to AWS. For more details on migration and acceptance testing, download the cloud migration measurement guide Measure Twice, Cut Once. To get started, head over to newrelic.com/aws and sign up for an account today.

Four Key Steps of Optimizing Your IT Resources

In past posts, we have written about The Evolution of Managed Services in Hyperscale Cloud Environments and discussed how AWS and our APN Partners are Raising the Bar in light of this progression. Let’s now hear directly from a few of our MSP Partners who have embraced this new ideology in managed services and who are enabling increasing levels of innovation for their customers.

This is the first in a weekly series of MSP APN Partner guest posts that we will be publishing here on the APN blog this summer. Our partners will be sharing their expertise on critical topics that enable customers to embrace the AWS Cloud and more fully realize the business value it makes possible. Topics will include cloud automation, optimizing after migration, hybrid cloud management, managed security, continuous compliance, DevOps transformation, and MSPs as innovation enablers.

Let's first hear from Jeff Aden, Founder & EVP Marketing and Business Development at 2nd Watch (APN Premier Partner, MSP Partner, and holder of multiple AWS Competencies), as he discusses the importance of refactoring and optimizing workloads after migration to AWS and how a next-gen AWS MSP Partner can deliver this value.

The Four Key Steps of Optimizing Your IT Resources, by Jeff Aden, Founder & EVP Marketing and Business Development at 2nd Watch

The 2nd Watch team has been very fortunate to have the opportunity to help some of the world’s leading brands as they strategize, plan, and migrate their workloads to the cloud as well as support them in ongoing management of those workloads. I have worked with some of the foremost cloud experts within 2nd Watch, AWS, and our customers, and can share best practices and Four Steps to Optimizing your IT Resources that we have learned over the years.

Optimization in the world of cloud requires data science, expertise in cloud products, and, most of all, an understanding of the workloads supported by the infrastructure. We are not talking about a single application where anyone can provision a few instances; this is about large migrations and the management of many thousands of workloads. Within that context, the sheer volume of choices in cloud products and services can be overwhelming, but the opportunities for digital transformation and optimization can be huge.

Reaping maximum performance and financial optimization requires a combination of experience and automation. On average, we’ve been able to save our customers 42 percent more with our optimization services than if they managed their own resources.

We see optimization in cloud typically driven by:

1. Migration to cloud

Typically this is the first step to optimizing your IT resources and the most common first step for many enterprises. By simply moving to the cloud, you can instantly save money by paying only for what you use. The days of old—where you bought IT resources for future use rather than what you needed today—are over. This brings enormous savings in three areas: the time you (IT) and the business spend planning for future needs; the space you need to own or rent for your data center, along with all the logistics of operating it; and the cost of IT hardware and software, since you are no longer buying today for what you may use by 2020. The initial move can save companies between 40 and 60 percent over their current IT spend.

All of this can be done by migrating to AWS without rewriting every application or making huge bets on large and expensive transformation projects. Customers like Yamaha have achieved savings of $500,000 annually, and DVF racked up 60 percent savings on hosted infrastructure compared to a legacy provider. And, one of the most recognizable use cases is Conde Nast, which increased performance while saving 40 percent on operational costs.

2. Performance

This is an ongoing optimization strategy in the cloud. Auto scaling is probably one of the most widely leveraged strategies for tuning performance in the cloud. It provides the ability for IT to add resources based on internal or external demand, in real time and with automation. Once you have your workloads in the cloud, you can begin to understand what your resources are really using and how to leverage auto scaling to save money and meet demand for both enterprise and consumer-facing applications.

However, in order to take advantage of auto scaling, you must first fine-tune some of the basics to really increase performance. One way to do this is to ensure that you have selected the correct instance type and type of storage. Another is to ensure your auto scaling is primed and ready to take the load. Customers can also purchase additional IOPS or replace Amazon EBS volumes as needed.
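
To make "primed and ready" concrete, here is a hedged boto3 sketch (not 2nd Watch tooling) of a target-tracking scaling policy that keeps an Auto Scaling group's average CPU near a chosen utilization; the group name and target value are placeholders.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Scale the group out and in automatically to hold average CPU near 50 percent.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="example-web-asg",  # placeholder
    PolicyName="keep-average-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)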

3. Financial

Optimization is probably the most intricate and ever-changing aspect of cloud computing today. The introduction of a new product or version can benefit customers dramatically but can also cause some headaches along the way. Often, we see prospects start with financial optimization before they have even looked at migration or performance, or considered how it fits into the comprehensive financial approach a company uses to achieve overall savings.

We encourage clients to migrate their workloads and understand their performance needs prior to purchasing any Reserved Instances (for example) in order to maximize their resources and ROI. Clients should focus on getting some other foundational areas established first, like organization-wide tagging best practices, visibility into the right reporting KPIs, and an understanding of workload needs. Once this is achieved, we find that RIs can be a very effective way to save money and increase ROI. Clients like Motorola, who took this approach, have saved more and increased their ROI more than was possible with their legacy managed service provider.

2nd Watch was built on cloud-native tools and leverages automation that allows for scalability. We manage more than 200,000 Amazon EC2 instances today and understand the complexity beyond the numbers. Behind our solutions for optimization is a team of experienced data scientists and architects that can help you maximize the performance and returns of your cloud assets.

4. Evolving to optimal AWS services

This is the next step in maximizing performance and achieving greater savings. Some companies elect to start by refactoring. While this is the right starting point for some companies, we have found—over many years and through many customer examples—that this can be challenging if customers are also trying to familiarize themselves with running on the cloud. The 2nd Watch proven methodology aligns with where clients are positioned in their journey to the cloud. If you take our approach, the company’s employees and vendors become immersed in the cloud and see it as the “new normal.” Once acclimated, they can explore and understand new products and services to propel them to the next level.

Companies like Scor Velogica saved an additional 30 percent on hosting and support costs by evolving their application in a SOC 2 cloud-native environment. Celgene reduced the time to run HPC jobs from months to hours by taking this approach. Not only do we preach this approach, we practice it before putting it into effect with clients: we moved off Microsoft SQL Server to Amazon Aurora, increasing performance and capacity without increasing costs or risks. Other clients take it in steps, moving to a product like Amazon Relational Database Service (Amazon RDS) and realizing operational savings when they can clone a new database without the administrative overhead of traditional IT.

A hyper-scale AWS Managed Service Provider (MSP) partner like 2nd Watch can provide tested, proven, and trusted solutions in a complex world. Gartner has named 2nd Watch a Leader in its Magic Quadrant for Public Cloud Infrastructure Managed Services Providers, Worldwide report for its ability to execute and completeness of vision. Access the report, compliments of 2nd Watch, for a full evaluation of all MSPs included. With the right level of experience, focus and expertise it is not so cloudy out there.

To get started or for help, contact www.2ndwatch.com.

 

Jeff Aden

Founder & EVP Marketing and Business Development

2nd Watch, Inc.

www.2ndwatch.com

AWS Microsoft Workloads Competency Now Includes Database Solutions and New APN Partners to Serve Customers Running Windows-based Workloads on AWS

APN is excited to announce updates to the AWS Microsoft Workloads Competency, as well as introduce the latest wave of talented APN Partners with Microsoft Workloads Competency status! This Competency has been updated to provide our customers with an easy way of finding skilled APN Partners to match their Windows workload migration goals, and to apply a stringent validation process to ensure expert delivery by qualified APN Consulting Partners.

Based on feedback from our customers and APN Partners, we have combined the SharePoint and Exchange competencies under a new “Productivity Solutions” category, while SQL Server will be represented within the category “Database Solutions”. This will allow customers to easily identify and engage the right solution APN Partners.

APN Partners with AWS Microsoft Workloads Competency status have demonstrated technical proficiency and proven customer success in the design, migration, deployment, and management of Microsoft-based applications on AWS, with specific focus on workloads based on Microsoft SQL Server, Microsoft SharePoint, and Microsoft Exchange Server. APN Partners with AWS Microsoft Workloads Competency status are qualified through a detailed assessment and third-party audit of the security, performance, and reliability of their solutions and must demonstrate a deep understanding of and adherence to AWS architectural best practices, overall and also relative to Microsoft applications and technologies. APN Partners who have attained the AWS Microsoft Workloads Competency status represent a uniquely qualified set of solution providers whom customers can look to for proven capability and maturity in their Microsoft solution/practice on AWS.

Today we are excited to announce that 10 new APN Partners have attained the AWS Microsoft Workloads Competency for Database Solutions, and that two of these APN Partners have also achieved the AWS Microsoft Workloads Competency for Productivity Solutions.

Announcing the AWS Microsoft Workloads Competency for Database Solutions and the 10 APN Partners who have earned this status.

Congratulations to the initial APN Partners who have qualified for the AWS Microsoft Workloads Competency for Database Solutions!

We are also pleased to announce that two APN Partners, Datapipe and InfoReliance, have also attained the AWS Microsoft Workloads Competency for Productivity Solutions. This area of the Competency recognizes APN Partners for their validated expertise and architectural best practices for Microsoft SharePoint or Exchange running on AWS. APN Partner solutions address areas including content management, search, and workflows (SharePoint); optimized communications management (Exchange); design, access, and administrative control; high availability; and compliance and security.

About the AWS Microsoft Workloads APN Competency

AWS first launched Windows Server instances in 2008, and now, nearly nine years later, enterprise and public sector customers around the globe trust AWS to run their business-critical Microsoft-based workloads. Today on AWS, customers can choose from a broad array of Windows-based offerings and applications to meet their needs, including Amazon EC2 running Microsoft Windows Server (2003 R2, 2008, 2008 R2, 2012, 2012 R2, and 2016), several editions of Microsoft SQL Server on Amazon EC2 or Amazon Relational Database Service (Amazon RDS), tools for Windows and .NET development and management, and support for just about every other Microsoft application a customer wants to deploy by bringing their own licenses (BYOL) to AWS.

Are you a customer interested in leveraging an AWS Microsoft Workloads Competency Partner? Click here to view and connect with the APN Partners.

If you're an APN Consulting Partner experienced in migrating Windows workloads to AWS, click here to learn more about becoming an AWS Microsoft Workloads Competency Partner. If you're an APN Partner working on building your capabilities around Microsoft workloads, click here to see the APN Partner journey and the tools available to you. To learn more about running Windows workloads on AWS, visit the Windows Server on AWS page.

To learn more about the benefits of joining the APN, visit our APN Partner Homepage.

Partner SA Roundup – June 2017

For this month’s Partner SA roundup, AWS Partner SAs Juan Villa, Pratap Ramamurthy, and David Potes discuss three APN Technology Partners: Cloud Conformity, Kinetica, and Vault. Let’s dive in!

 

Cloud Conformity, by Juan Villa

 

Have you ever wanted to have an always-available advisor who specializes in cloud operations? An advisor that can find opportunities to increase the security, reliability, performance, and operational excellence of your systems, while also helping you decrease costs? Cloud Conformity is a SaaS platform that does exactly this. They have built a continuous assurance tool that builds on the AWS Well-Architected Framework, as well as additional best practices, to provide you with detailed and accurate advice.

Getting started is easy. All you have to do is create an account on their website, and then follow the setup wizard. This wizard will instruct you to create a cross-account role in your AWS account to give Cloud Conformity read-only access to your resources. Cloud Conformity uses this information, along with over 250 rules, to drive the automated logic that powers the advisor and ultimately the recommendations they make to you.

You might be wondering how Cloud Conformity improves your security posture or provides you with recommendations to reduce cost. Let’s consider two examples. To see how they handle security, let’s say you’ve created a Linux-based EC2 instance and configured it with a security group that allows access to the SSH port from anywhere in the world. That’s probably the easiest way to get started, but it isn’t best practice from a security standpoint. Cloud Conformity’s engine constantly scans your account, taking into consideration all the configured rules. One of these is a rule to detect security groups that have broad open access to the Internet. When Cloud Conformity detects the insecure security group, it generates an alert to notify you, and even provides you with detailed steps on how to remediate this issue.
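
As a rough sketch of the kind of check described here (our illustration, not Cloud Conformity's implementation), the following boto3 snippet flags security groups that allow SSH from anywhere:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for permission in group.get("IpPermissions", []):
        from_port = permission.get("FromPort")
        to_port = permission.get("ToPort")
        # Match rules that cover TCP port 22 (or all traffic when no ports are set).
        covers_ssh = from_port is None or (from_port <= 22 <= (to_port or from_port))
        open_to_world = any(r.get("CidrIp") == "0.0.0.0/0" for r in permission.get("IpRanges", []))
        if covers_ssh and open_to_world:
            print(f"Open SSH access: {group['GroupId']} ({group['GroupName']})")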

Our second example is cost optimization, where Cloud Conformity can find opportunities in many areas. For example, it analyzes the usage of EC2 instances using Amazon CloudWatch metrics, such as the CPU usage reported by each instance. As the administrator of the Cloud Conformity account, you can configure the usage thresholds that define a mostly idle instance, and Cloud Conformity will help identify oversized instances. It will even help you find cost-saving opportunities by identifying instances that would benefit from Reserved Instance allocations!

We’ve only scratched the surface of what Cloud Conformity can do. Cloud Conformity currently has over 250 rules in its engine, and the list is growing. They have detailed, thorough documentation and a very easy-to-use platform. I encourage you to check them out at https://www.cloudconformity.com and get started today!


Kinetica, by Pratap Ramamurthy and Juan Villa


Some applications generate rapid streams of information, such as Twitter feeds and equity trades. These streams can be stored and analyzed to derive insights at a later point in time. However, analyzing streams in real time and taking automated actions can provide much higher value to the business. Real-time analysis can be difficult, though, due to factors such as the volume and velocity of the stream, scaling, and the dimensionality of the data being analyzed, for example in geospatial location analysis or advanced analysis like natural language processing (NLP).

There are now new ways of using graphics processing unit (GPU) technology. While GPUs were originally intended for video rendering and gaming applications, their power can be harnessed for other tasks as well. A single GPU instance, such as the AWS p2.16xlarge instance type, contains 16 NVIDIA Tesla K80 GPUs, each with 2,496 NVIDIA CUDA cores and 12 GiB of on-board memory.

Kinetica is a GPU-accelerated, in-memory database for powering analytics workloads. It is designed to take advantage of the parallel processing ability of GPUs to provide low-latency, high-performance analytics on large datasets, whether at rest or streaming. It makes processing complex real-time data faster and easier on AWS: you can feed a real-time stream into a Kinetica database and run SQL queries on the stream in real time. Kinetica integrates with several different data sources and feeds various BI, GIS, and other third-party applications for visualization and additional analysis.
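
Kinetica’s own ingest and query APIs aren’t shown here; purely to illustrate the streaming side of that pattern, the sketch below pushes a record onto an Amazon Kinesis stream that a downstream consumer such as Kinetica could read from. The stream name and record fields are placeholders.

import json
import time

import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = "trades-stream"  # placeholder stream name

# Push one record (for example, an equity trade) onto the stream; a
# downstream consumer can read and analyze these records in near real time.
trade = {"symbol": "EXAMPLE", "price": 101.25, "timestamp": time.time()}
kinesis.put_record(
    StreamName=STREAM_NAME,
    Data=json.dumps(trade).encode("utf-8"),
    PartitionKey=trade["symbol"],
)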

With Kinetica, an orchestration layer with user-defined functions enables organizations to develop sophisticated data science models on the same database platform they’re using for business analytics. This means that companies can bypass the arduous step of first transforming the data and moving it back and forth between a database and a separate data science system.

Kinetica also provides a web-based visualization framework called Reveal that makes it quick and easy to explore geospatial data.

This framework also integrates with the Kinetica geospatial pipeline for advanced mapping and interactive location-based analytics.  With Kinetica, machine learning, deep learning, natural language processing, and OLAP workloads can all now be performed from a single solution through C++, Java, Python, SQL, and your favorite point-and-click BI tool.

I encourage you to check out Kinetica’s product on the AWS Marketplace. You can also find additional information on the features and benefits of Kinetica on AWS in their most recent partner brief.


Vault, by David Potes


Managing secrets in a cloud infrastructure presents new opportunities and challenges for developers and administrators looking to improve security by storing, securing, and tightly controlling access to secrets. Vault, by HashiCorp, is a powerful tool that integrates tightly with AWS to help administrators manage those secrets.

Vault has a notion of auth backends, and HashiCorp has developed one specifically for authentication with AWS. This backend treats AWS as a trusted third party, which means, in most cases, that no pre-provisioning of security-sensitive credentials (such as tokens, passwords, or client certificates) is necessary. Vault handles leasing, key revocation, key rolling, and auditing, and users can access an encrypted key/value store and generate IAM and AWS STS credentials.
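
As one small example of that last point, the sketch below calls Vault’s HTTP API to generate short-lived AWS credentials. It assumes a Vault server at http://127.0.0.1:8200 with the AWS secrets engine mounted at aws/ and a role named deploy already configured; the address, token, and role name are all placeholders.

import requests

VAULT_ADDR = "http://127.0.0.1:8200"  # placeholder Vault address
VAULT_TOKEN = "s.example-token"       # placeholder client token
ROLE_NAME = "deploy"                  # hypothetical pre-configured role

# Ask the AWS secrets engine (assumed to be mounted at aws/) for
# dynamically generated IAM credentials tied to the given role.
response = requests.get(
    VAULT_ADDR + "/v1/aws/creds/" + ROLE_NAME,
    headers={"X-Vault-Token": VAULT_TOKEN},
)
response.raise_for_status()
creds = response.json()["data"]
print(creds["access_key"], creds["secret_key"])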

HashiCorp is an AWS Standard Partner and holds the AWS DevOps Competency, with a focus on configuration management.

It’s easy to give Vault a try! Check out the interactive tutorial that HashiCorp provides, or use our Vault Quick Start to spin up an instance for yourself. If you’d like to learn more about how Fanatics, the online retailer for licensed sports apparel, uses Vault to secure their highly elastic AWS infrastructure, take a look at this webinar I presented with Seth Vargo and Paulo Machado from HashiCorp.

Supporting Customers as They Migrate Microsoft SQL Server Workloads to AWS

A guest post by Datavail, an AWS Partner Network (APN) Consulting Partner and an AWS Microsoft Workloads Competency Partner specializing in database solutions.  

Through the AWS Competency Program, we’ve been validated by AWS for our proven expertise in offering applications and database solutions built on the Microsoft SQL Server platform on AWS. This includes data management and warehousing, business analytics, structured and unstructured database integration and operation, and ensuring a high degree of security and regulatory compliance.

In this post, we would like to walk you through some of the benefits we have seen customers gain by running Microsoft SQL Server workloads on AWS. We will then share several real-life examples to illustrate our process for helping customers migrate SQL Server workloads to AWS.

Benefits of running Microsoft SQL Server on AWS

Amazon Relational Database Service (Amazon RDS) is a fully managed database service that helps customers quickly launch and configure a database server for immediate use. The following are just a few of the features that make Amazon RDS for Microsoft SQL Server very attractive:

  • Automated backups
  • Point-in-time recovery
  • High availability through Multi-AZ (multiple Availability Zone) deployments (see the sketch following this list)
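
As a minimal sketch of getting started with these features, the boto3 call below launches a Multi-AZ SQL Server Standard Edition instance with automated backups enabled. Every value shown (identifier, instance class, storage, credentials) is a placeholder, and a real deployment would also involve network, security group, parameter group, and licensing decisions.

import boto3

rds = boto3.client("rds")

# Launch a Multi-AZ Microsoft SQL Server instance; all values are placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="example-sqlserver",
    DBInstanceClass="db.m4.large",
    Engine="sqlserver-se",
    LicenseModel="license-included",
    AllocatedStorage=200,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_A_STRONG_PASSWORD",
    MultiAZ=True,
    BackupRetentionPeriod=7,  # enables automated backups and point-in-time recovery
)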

The alternative to running Amazon RDS SQL Server instances is to run Microsoft SQL Server on an Amazon Elastic Compute Cloud (Amazon EC2) instance. This option lets you take advantage of the various instance and storage types offered by AWS. Additional benefits include:

  • Full control: Complete control over settings and configurations just like on-premises instances.
  • Elastic and on-demand: You can increase or decrease capacity in minutes, commission one or hundreds of instances automatically, and power instances on and off as needed to meet demand.
  • Reliability and security: The Amazon EC2 Service Level Agreement (SLA) commitment is 99.95% availability for each AWS Region. In addition, AWS data centers and network architectures are built to meet the requirements of the most security-sensitive organizations, which can help you satisfy your security and compliance requirements.
  • High availability plus performance plus disaster recovery: Yes, it can be done! With careful planning and architectural design, your Microsoft SQL Server instances can meet high availability, performance, and disaster recovery requirements at the same time. For example, you can configure a Microsoft SQL Server AlwaysOn Availability Group with one primary node for production workload, one secondary node for read-only routing to offload reporting requirements, and a third node for disaster recovery.

Datavail’s Process for Migrating Microsoft SQL Server to AWS

When clients seek assistance with migrating their complex environments to AWS, we initiate a detailed discussion to best understand the timeline, requirements, and steps involved in the migration process. We conduct an initial assessment, prepare a migration plan, and then meet with the client to review the details, challenges, and steps involved. We also help clients determine the optimal approach, Amazon RDS or Amazon EC2, to meet their SQL Server needs.

Case study: Migrating Sony DADC NMS to AWS

Sony DADC New Media Solutions (NMS) specializes in the delivery of digital media from the content provider to the end user (the content consumer). The company’s core mission is to deliver best-of-breed, secure, and innovative supply chain solutions for its customers throughout the world. Users of Sony DADC NMS include many major motion picture studios, television broadcasters, radio broadcasters, online content portals, music labels, game companies, and software providers.

The company’s database environment is inherently complex. Its content is multi-source and multi-channel, and can consist of structured and unstructured data, including text, images, audio, and video. According to Sony, its content management system “provides a predictable, scalable, and flexible approach to more quickly and efficiently deliver media services to audiences,” and “significantly reduces security exposure by ensuring the lowest number of possible touchpoints of valuable assets.”

Sony DADC NMS Moves to Cloud as Ven.ue

In 2016, Sony DADC decided to make its content available in the cloud under the name “Ven.ue.” Datavail, which has DBAs with deep expertise in AWS Cloud technologies, was contracted by Sony to carry out the Microsoft SQL Server-related tasks, including planning, testing, and execution.

A task list with several milestones was prepared for Datavail. The migration process included the following steps:

  • Installing and configuring Microsoft SQL Server 2014
  • Provisioning AlwaysOn Availability Groups
  • Migrating the database
  • Establishing a performance baseline
  • Conducting AlwaysOn Availability Group failover testing
  • Completing the final migration

Offloading the reporting workload from the production database proved to be one of Sony DADC’s biggest challenges. We addressed this challenge by using AlwaysOn Availability Groups and read-only routing. Sony DADC NMS and Datavail held weekly meetings and work sessions to discuss open items, roadblocks, and the next action items for the project.

Our expertise with Microsoft SQL Server and AWS helped Sony DADC NMS accurately scope, plan, troubleshoot, and execute each task within this project. Additionally, as a result of this project, we can now support customers using Sony DADC NMS who want to migrate to AWS, and can also support AWS customers interested in the benefits of Sony DADC NMS as a content warehouse solution.

Case study: Teamwork at Mammoth Mountain Ski Area

Mammoth Mountain Ski Area turned to Datavail and AWS to meet their high availability needs for the SQL Server deployment and to offload reporting functions to AWS. Datavail replicated their databases from an on-premises server to AWS for reporting and high availability. The publisher and distributor reside on an on-premises Microsoft SQL Server instance, and the subscriber is located on SQL Server instances running on EC2 instances. To further increase redundancy and availability, there’s an additional SQL Server instance running on Amazon EC2 as a warm standby server for the subscriber instance. This unique setup enables high availability as well as off-loading the reporting needs to the subscriber running on an Amazon EC2 instance.

Off-loading reporting minimizes the contention and performance impacts on the production OLTP server. In addition, the SQL Server instance running on an Amazon EC2 instance serves as a warm standby server for disaster recovery.

Architecting and supporting a hybrid model of SQL Server instances running on premises and on AWS is a team sport. Communication and collaboration among the systems, network, application, and DBA teams are paramount to a successful outcome. For example, service pack and hotfix rollouts and any maintenance activities require all teams to agree on timing so they can notify the business units in advance, coordinate on technical tasks and responsibilities, and schedule regular updates to business stakeholders.

Case study: Moving a major education provider to AWS

This case study involves the migration of Microsoft SQL Server to AWS for an international education provider that works with high schools, colleges, and universities all over the world. The company’s rapid growth presented unique challenges, such as over-provisioned Microsoft SQL Server instances, insufficient monitoring and alerting, and inadequate or nonexistent disaster recovery and high availability solutions.

The education company had 12 data centers it wanted to consolidate. The IT team was spending too much time dealing with alerts and troubleshooting problems to focus on IT as a competitive asset, and it had built an infrastructure sized for peak loads that sat unused most of the time.

Checklist for Microsoft SQL Server migration to AWS

We assessed the company’s system and helped put the following migration plan into motion:

  • Provisioning EC2 instances
  • Provisioning Amazon RDS instances for Microsoft SQL Server, Oracle, MySQL, and PostgreSQL
  • Provisioning Auto Scaling and load balancing for Amazon EC2 instances
  • Implementing redundancy at every stage to reduce single points of failure
  • Implementing best practices in scalability and security
  • Configuring Amazon CloudWatch to help collect and track usage metrics and manage alarms (see the sketch following this list)
  • Configuring Amazon Route 53 to route traffic to the nearest Availability Zone
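
As an example of the CloudWatch step above, the sketch below creates a single alarm that fires when a (hypothetical) SQL Server RDS instance averages over 80% CPU for two consecutive five-minute periods; the instance identifier and SNS topic ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on sustained high CPU for one RDS instance; values are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="sqlserver-high-cpu",
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "example-sqlserver"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:dba-alerts"],
)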

The result was a significant consolidation and integration of their IT infrastructure. By using Amazon RDS, the onsite DBA team can focus more on business-critical aspects of their databases and less on day-to-day enhancements. Elastic Load Balancing allows the company to move away from expensive and complicated load balancers. The company can optimize its resources by right-sizing its instances when utilization rates fall.

Case study: A fresh approach to database administration

The following example involves a multinational producer and marketer of fresh fruit and vegetables. When your inventory can spoil in a matter of moments—whether it’s left too long in the field or not sold to retailers in time—every second matters.

The company was having problems with its data operations. Its Microsoft SQL Server system performed poorly, in part because the company had no overnight support. It also lacked a testing environment for engineering a solution.

After the assessment, the Datavail team set up a test environment, migrated Microsoft SQL Server to Amazon RDS, and decommissioned the test environment when it was no longer needed. We integrated 24x7x365 monitoring and aligned processes with best practices.

Data was successfully migrated to AWS during scheduled maintenance windows. Alerts and tickets fell as database performance improved. Monitoring and managed services ensured that the company’s DBAs could sleep soundly at night, knowing that any overnight alerts had already been addressed by Datavail DBAs by the time they arrived at work each day.

Conclusion: Moving SQL to AWS

If you are familiar with Microsoft SQL Server but new to AWS, Datavail’s experts can help assess, plan, and execute your move to the cloud. If you are familiar with AWS and want the benefits of running Microsoft SQL Server in that environment, our experts can help assess, install, test, and build out a system to handle your needs. We have experts in both technologies who can make the transition smooth and safe.

Datavail is a specialized IT services company focused on data management with solutions in BI/DW, analytics, database administration, custom application development, and enterprise applications. We provide both professional and managed services delivered via our global delivery model, focused on Microsoft, Oracle, and other leading technologies.

Powering IoT Innovation on AWS – Bsquare, Eseye, and ThingLogix Highlights, June 2017

Introduction by Terry Wise, Global Vice President, Channels & Alliances at AWS

The Internet of Things (IoT) is changing the way companies collect, store, analyze, and act on data through connected devices. AWS has built IoT-specific services, such as AWS IoT and AWS Greengrass, to help you simply and securely connect your devices to the cloud, act on the data (either in the cloud, on the connected device, or both), and manage your devices so you can focus on developing applications that fit your needs. To accelerate your IoT design, development, and deployment, we work with a robust APN Technology and Consulting Partner ecosystem that provides services and solutions that complement AWS services and help customers take further advantage of AWS for IoT.
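
To give a feel for how simple the device-to-cloud path can be, here is a minimal boto3 sketch that publishes a telemetry reading to an AWS IoT topic from the cloud side (on a device you would typically use an AWS IoT device SDK or AWS Greengrass instead). The topic name and payload fields are placeholders.

import json

import boto3

# Publish one telemetry reading to an AWS IoT MQTT topic (placeholder values).
# Depending on your account, you may need to pass your IoT data endpoint via
# endpoint_url when creating the client.
iot_data = boto3.client("iot-data")
iot_data.publish(
    topic="devices/example-sensor/telemetry",
    qos=1,
    payload=json.dumps({"temperature": 21.7, "unit": "celsius"}),
)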

Today, I’d like to discuss three of our AWS IoT Competency Partners: Bsquare, Eseye, and ThingLogix.

Bsquare, an AWS IoT Competency Partner

AWS Powered Applications to Improve Industrial Asset Uptime

What specific business outcomes are you looking to achieve through your industrial operations? And how can you improve your existing business processes and outcomes by harnessing the power of IoT? Bsquare, an Advanced Technology Partner and AWS IoT Competency Partner, aims to help you move your business forward and take it to new heights through the company’s DataV™ applications. How? Simply put, DataV applications are meant to help you optimize business processes through data-driven insights made possible by making your physical assets intelligent and well connected. Does that sound up your alley? Let’s learn more.

“Bsquare’s DataV applications create business-focused IoT systems deployable in concert with AWS,” explains Dave Wagstaff, Bsquare Chief Technology Officer. “DataV helps businesses extract value from physical assets by using data to drive superior business outcomes. With it, businesses can connect remote devices, monitor data streams, automate corrective processes, predict adverse conditions before they occur, and, ultimately, optimize the performance of critical business assets and processes.” Bsquare has helped numerous industrial companies around the world, including Kenworth and Peterbilt (find a number of additional case studies here). DataV is optimized for deployment on the AWS Cloud and utilizes a broad range of AWS services that can include IoT Services (such as AWS IoT and Greengrass), Amazon EC2, Amazon S3, AWS Lambda, Amazon Machine Learning, Amazon Kinesis, Amazon Redshift, and additional services as customer solutions may require. “Utilizing AWS allows Bsquare to leverage the power, security and scalability of both core infrastructure as well as industry leading services groups such as IoT, Analytics and AI,” says Wagstaff. “By integrating DataV with AWS, Bsquare is able to focus on developing differentiated IoT applications and not worry about the heavy lifting associated with infrastructure and core services.”

Are you ready to give Bsquare a try? Reach out to the Bsquare team, and they can help you start with a Proof of Concept or Application Pilot (which is often done with your existing data) to quickly validate outcomes and ROI with a solution utilizing AWS services and DataV applications. While Bsquare also offers SI services, the company engages with a wide range of Technology and Consulting partners who are actively working with customers who can benefit from a DataV solution. “Because DataV typically fits within a larger, enterprise wide solution, Bsquare works closely with partners, including ISVs, SIs and Consultants, to deliver an integrated solution,” says Geoff Goetz, Bsquare Senior Director of Alliances.

Learn more about Bsquare by checking out a recent webinar, co-hosted with AWS, here. You can find industry-specific use case information here.


Eseye, an AWS IoT Competency Partner

Technology for Connectivity

Are you looking for a simplified way to securely connect and quickly scale your IoT device deployments globally? Eseye, with its goal to secure IoT from manufacturing to deployment and throughout the device lifecycle, is here to help. “Eseye’s day-by-day mission is to support our customers’ business growth by delivering managed global IoT device cellular connectivity: we aim to offer the most comprehensive and supported way businesses can deploy and scale cellular IoT today,” explains Damian McCabe, Eseye’s GM, Americas. So what is the Eseye solution, exactly? “The Eseye AnyNet Secure SIM is an enhanced and robust global roaming cellular connectivity solution, exclusively for IoT machines. AnyNet Secure SIM provides AWS IoT customers with enhanced security and connectivity features that enables IoT devices across the globe to remotely, automatically and securely activate, provision, authenticate and certify ‘things’, over-the-air, and to then ingest data into AWS customers’ clouds. Eseye leverages what is known as a ‘multi-IMSI’ SIM. In effect, it’s the same as having lots of SIMs from different Mobile Network Operators (MNOs) from all over the world, all on the one card and backed up with 24/7/365 specialist IoT support,” says McCabe. “We feel this makes life much easier for IoT customers who need scale quickly to feed business and sometimes life-critical data into their AWS Cloud deployments. Remember, customers will be managing anywhere from ten to ten thousand-plus fixed and moving IoT devices, spanning vast geographical regions and multiple MNO territories and we try to make that simple with a one SIM, one monthly bill solution.” Eseye works with over 800 customers across the globe, including eWATER (read the recent AWS case study on eWATER featuring Eseye here!).

Eseye began working with AWS in 2016, quickly becoming an Advanced Technology Partner and AWS IoT Competency Partner. The AnyNet Secure connectivity solution fully integrates with the AWS IoT Management Console, with all data securely ingested into the user’s AWS cloud storage. “We were a trailblazer in IoT and are passionate about the business and societal benefits rapidly scalable, more secure cellular M2M services bring for our customers, and in turn their customers,” explains McCabe. “Amongst the main benefits, therefore, of integrating with AWS are that the right solution gets to a wider marketplace rapidly. The AWS team have also been formidable to work with. There has been a real meeting of minds and ambition throughout this project!”

Care to “connect” with Eseye? You can request a free, one-month trial of the Eseye AnyNet Secure SIM here. Are you an APN Partner looking to help customers deploy and scale cellular IoT solutions today? Eseye brings specialist IoT expertise to each project in which the company is engaged, and works with a number of partners. “If it’s IoT and cellular, then we can help,” says McCabe. Learn more about becoming a partner here.

Learn more about Eseye here. And don’t miss your opportunity to learn more from Eseye CTO Ian Marsden about how Eseye built AnyNet Secure for AWS IoT on This Is My Architecture!


ThingLogix, an AWS IoT Competency Partner

Technology for Platform Providers

Are you hoping to take advantage of IoT to create new sources of value for your business and your end customers, but you’re not quite sure of the specific role IoT can play in driving business growth or even know where to begin? Enter ThingLogix, an Advanced APN Technology Partner and AWS IoT Competency Partner. ThingLogix is all about helping you accelerate your IoT adoption by helping you rapidly develop IoT solutions and supporting the full-lifecycle operation of these solutions.

“We help companies understand how IoT technologies can create new sources of growth and profit, and then we transform those technologies and business model concepts into differentiated, value-added IoT solutions,” says Carl Krupitzer, CEO of ThingLogix. Self-proclaimed cloud-first believers, the team at ThingLogix has built Foundry, a 100% cloud-based, full application stack for IoT. Foundry is a proprietary cloud platform, and Foundry Packages are component applications that can help customers across a wide range of industries and use cases create, deploy, manage, and evolve connected solutions. ThingLogix customers include Beacon Technical Systems, Sealed Air, Sennco, and Jarden Consumer Solutions (find more case studies here).

Architected using AWS best practices, Foundry and Foundry Packages integrate with a number of widely adopted enterprise solutions, including Salesforce, ServiceMax, and ThingWorx. ThingLogix chose to architect its solutions on AWS because of the enterprise-readiness of AWS infrastructure and services and the market leadership that AWS has demonstrated, both in terms of groundbreaking technology innovation and strong customer adoption. “We believe that companies generate much more value for themselves by focusing on the market-facing aspects of IoT solutions and IoT-enabled business models, rather than on the technical underpinnings of the solution and solution architecture. We saw a unique opportunity on AWS to help customers focus on the business value they can drive with IoT,” says Rob Rastovich, CTO of ThingLogix. “To be attractive to end customers and sustainable in the market, an IoT solution must be enterprise-grade: high-performance, reliable, scalable, extensible, and secure. AWS offers all of these attributes, making it easier for our cloud platform to offer these attributes as well,” adds John Mack, CMO of ThingLogix. “Additionally, with AWS having become the industry standard for cloud infrastructure and services, our prospective customers are already using AWS in most cases.” In addition to utilizing core AWS services for IoT (AWS IoT, Greengrass, and the Enterprise IoT Button program), the company uses a number of AWS products to deliver functionality for market-facing IoT solutions, including AWS Lambda, Amazon API Gateway, Amazon DynamoDB, Amazon Kinesis (Analytics, Firehose, and Streams), Amazon Rekognition, Amazon Machine Learning, and Amazon QuickSight.

How can you get started with ThingLogix? It’s easy! ThingLogix Foundry is available on AWS Marketplace. And if you’re a Consulting Partner looking for ways to help your customers bring IoT solutions to market rapidly, ThingLogix can help. “When an AWS Consulting Partner uses ThingLogix technologies to develop and deliver market-facing IoT solutions for its clients, it creates the basis for a long-term professional services engagement, as an initial IoT solution can serve as a proof-of-concept for multiple subsequent solutions,” says Carl Krupitzer. If you’re interested in working with ThingLogix, get in touch with the team here.

Learn more about ThingLogix here.

Want to learn more about the benefits of building IoT solutions on AWS? Visit our website. And find all of our AWS IoT Competency Partners here.

Qubole featured in AWS “This is My Architecture”

By Paul Sears. Paul is a Partner Solutions Architect (SA) at AWS. 

Qubole opened the hood on the architecture of its Qubole Data Service (QDS) as Suresh Ramaswamy, Senior Director of Engineering at Qubole, discussed how the company built its big data platform on AWS. Suresh outlines how QDS manages the data processing infrastructure, automatically scaling the processing cluster as needed, and how QDS can leverage Amazon EC2 Spot Instances to help manage costs. You can learn more about Qubole’s architecture on AWS here.
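
QDS requests and manages that Spot capacity for you; purely to illustrate the underlying mechanism, here is a minimal boto3 sketch of requesting a single Spot Instance directly, with the AMI ID, instance type, and maximum price all placeholders.

import boto3

ec2 = boto3.client("ec2")

# Request one Spot Instance for transient processing capacity (placeholder values).
response = ec2.request_spot_instances(
    SpotPrice="0.10",
    InstanceCount=1,
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "m4.large",
    },
)
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])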