Amazon EC2 – Just the Facts

Amazon Elastic Compute Cloud (Amazon EC2) is one of the managed services AWS offers as part of its cloud computing portfolio.  It provides secure, resizable compute capacity in the cloud.  Now let’s break down what that means.

    1.    Compute (in Elastic Compute Cloud) refers to the compute or server resources, such as:
 •    Application server
 •    Web server
 •    Database server
 •    Game server
 •    Mail server
 •    Catalog server
 •    File server
 •    Computing server
 •    Proxy server

     2.    The Cloud (in Elastic Compute Cloud) refers to the fact that these are cloud-hosted compute resources.
     3.    Finally, the Elastic (in Elastic Compute Cloud) refers to the fact that, if properly configured, you can automatically increase or decrease the number of servers an application requires according to the current demand on that application.

Instead of thinking of them as servers, think of them as Amazon EC2 instances.  Instances let you pay as you go: you pay only for the time your instances are running.  In addition, a broad selection of hardware and software, and the choice of where to host your instances, are all aspects of the EC2 instance.  Amazon offers a wide variety of instance types to fit your business needs.  They differ by CPU, memory, storage, and networking capacity.

The instance types are as follows:

 •    General purpose – These instances provide a balance of compute, memory, and networking resources and can be used for a wide variety of workloads.  For example, a web server is an ideal candidate because it uses these resources in roughly equal proportions.
 •    Compute optimized – These instances are ideal for compute-bound applications that require high-performance processors, such as game servers.
 •    Memory optimized – These instances are designed to deliver fast performance for workloads that process large data sets in memory.
 •    Accelerated computing – These instances use hardware accelerators as co-processors, which can be more efficient than software running on CPUs.
 •    Storage optimized – These instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage.

Amazon EC2 instances have many features that will help you deploy, manage and scale your applications.  These features are as follows:
1.    Bare Metal instances
2.    Optimize Compute Performance and Cost with Amazon EC2 Fleet
3.    Pause and Resume Your Instances
4.    GPU Compute Instances
5.    GPU Graphics Instances
6.    High I/O Instances
7.    Dense HDD Storage Instances
8.    Optimized CPU Configurations
9.    Flexible Storage Options
10.    Paying for What You Use
11.    Multiple Locations
12.    Elastic IP Addresses
13.    Amazon EC2 Auto Scaling
14.    High Performance Computing (HPC) Clusters
15.    Enhanced Networking
16.    Elastic Fabric Adapter (Fast interconnect for HPC clusters)
17.    Available on AWS PrivateLink
18.    Amazon Time Sync Service

Understanding the meaning, the instance types, and the features is important when reviewing the Amazon EC2 facts.  There is one more item we feel is important to tackle: building and configuring an Amazon EC2 instance.  It is as easy as this checklist:
1.    Log in to the AWS Console
2.    Choose a Region (where to host instance)
3.    Launch EC2 Wizard
4.    Select AMI (software)
5.    Select instance type (hardware)
6.    Configure network
7.    Configure storage
8.    Configure key pairs
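For readers who prefer code, the checklist above maps fairly directly onto the parameters of the EC2 `RunInstances` API. The sketch below assembles those parameters in Python with boto3-style names; the AMI, subnet, security group, and key pair IDs are placeholders, not real resources.

```python
# Sketch of the launch checklist as EC2 RunInstances parameters.
# All IDs below are placeholders; substitute your own AMI, subnet,
# security group, and key pair. Region choice (step 2) happens when
# the client is created, e.g. boto3.client("ec2", region_name="us-east-1").

def build_run_instances_params(ami_id, instance_type, subnet_id,
                               security_group_ids, key_name, volume_gb):
    """Assemble the keyword arguments for ec2.run_instances(**params)."""
    return {
        "ImageId": ami_id,                       # step 4: select AMI (software)
        "InstanceType": instance_type,           # step 5: instance type (hardware)
        "SubnetId": subnet_id,                   # step 6: configure network
        "SecurityGroupIds": security_group_ids,  # step 6: firewall rules
        "KeyName": key_name,                     # step 8: key pair for SSH access
        "BlockDeviceMappings": [{                # step 7: configure storage
            "DeviceName": "/dev/xvda",
            "Ebs": {"VolumeSize": volume_gb, "VolumeType": "gp2"},
        }],
        "MinCount": 1,
        "MaxCount": 1,
    }

params = build_run_instances_params(
    ami_id="ami-0123456789abcdef0",               # placeholder
    instance_type="t3.micro",
    subnet_id="subnet-0123456789abcdef0",         # placeholder
    security_group_ids=["sg-0123456789abcdef0"],  # placeholder
    key_name="my-key-pair",                       # placeholder
    volume_gb=8,
)
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.run_instances(**params)
```

Note that region selection is a property of the client rather than of the request itself, which is why step 2 does not appear in the parameter dictionary.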

Amazon EC2 can help any organization that is looking to take at least some of their computing to the cloud.  Cloud Rush works with organizations in all steps of cloud migration.  From conventional data center infrastructure and operations to hybrid cloud infrastructure and application development to serverless computing and containerization, Cloud Rush is here to help.  We offer a complimentary consultation where we can dive deeper into Amazon EC2 and where it fits in your organization. 

Let’s Talk!

Amazon Chime

Is Amazon Chime right for your organization?

Amazon Chime is a secure, enterprise-ready unified communication service designed for frictionless adoption by users anywhere, on any device. High-quality audio and video make virtual meetings a pleasant reality. The easy-to-use meeting room features make sure meetings run smoothly and free of frustration. Amazon Chime is an AWS managed service, so your IT department can be assured of easy deployment, stable operations, and simple integration with your current infrastructure.

Amazon Chime Login

The user interface provides a consistent meeting experience across many devices and platforms. Both hosts and participants can accomplish basic tasks effortlessly, and the intuitive, consistent Amazon Chime user interface ensures frustration-free participation.

With Amazon Chime, you can manage communication, meetings, and events. Chat rooms provide a persistent venue for ongoing group communication, and group chats support ad hoc team interaction.  There are a variety of status symbols that will help you stay organized as well.  Joining a meeting is as simple as entering a 10-digit number.  In addition, Amazon Chime offers organizations a one-click event mode that gives all controls to the organizer.

Amazon Chime Pricing

Amazon Chime uses a pay-only-for-what-you-use model. This allows you to pay for the features you use on the days you use them.

Amazon Chime also offers user management, Active Directory integration, and the ability to use your own domain name with auto-registration of users.  It is also secure.  Because it is built on the AWS Cloud as an AWS service, you benefit from a data center and network architecture that meets the requirements of the most security-sensitive organizations.  In addition, all communication through Amazon Chime is encrypted using AES 256-bit encryption.

Whether you are hosting an online meeting, video conferencing, team collaboration, or business calling, Amazon Chime can simplify it by providing options for how you want to communicate in a single, secure application that lets you pay for only what you use.  So, if you think Amazon Chime is an application that could help your organization communicate better, then let’s talk.

As an Amazon Web Services partner, Cloud Rush helps businesses design, architect, build, migrate, and manage their workloads and applications on this powerful cloud platform. With more than 165 fully featured AWS services available, Cloud Rush can deliver solutions of all sizes depending on the needs of your organization.  To begin, we offer all of our potential clients a complimentary consultation.  This is where we will dive into your organization’s cloud service needs and provide you with a comprehensive cloud readiness plan.

Let’s Talk!
What is Amazon Kinesis?

Amazon Kinesis – A Quick Guide

Collecting, processing, and analyzing data to provide insights in real time is critical to organizations.  Amazon Web Services offers Amazon Kinesis for this very purpose.  Amazon Kinesis allows your organization to easily collect, process, and analyze video and data streams.  This real-time tool lets you consume data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Processing this data in real time allows your organization to respond instantly, giving it the upper hand.

The benefits of real-time processing, full management, and scalability are seen across all of Amazon Kinesis’s capabilities.  Those capabilities are the following:

1.    Kinesis Video Streams – Amazon Kinesis Video Streams is a fully managed AWS service that you can use to stream live video from devices to the AWS Cloud, or build applications for real-time video processing or batch-oriented video analytics.  Benefits of using Kinesis Video Streams include:

        a.    Connect and stream from millions of devices
        b.    Durably store, encrypt, and index data
        c.    Focus on managing applications instead of infrastructure
        d.    Build real-time and batch applications on data streams
        e.    Stream data more securely

2.    Kinesis Data Streams – You can use Amazon Kinesis Data Streams to collect and process large streams of data records in real time.  You can use Kinesis Data Streams for rapid and continuous data intake and aggregation. The data can include IT infrastructure log data, application logs, social media, market data feeds, and web clickstream data.  In addition, here are some scenarios for using Kinesis Data Streams:

        a.    Accelerated log and data feed intake and processing
        b.    Real-time metrics and reporting
        c.    Real-time data analytics
        d.    Complex stream processing
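As a hedged sketch of what sending a clickstream record might look like, the snippet below shapes an event into the parameters for Kinesis’s `PutRecord` call. The stream name and event fields are made up for illustration.

```python
import json

# Sketch: shaping a web-clickstream event for Kinesis Data Streams.
# Stream name and field names are hypothetical. Records sharing a
# partition key (here, the session ID) land on the same shard, which
# preserves per-session ordering.

def build_put_record(stream_name, event):
    """Assemble the keyword arguments for kinesis.put_record(**record)."""
    return {
        "StreamName": stream_name,
        "Data": json.dumps(event).encode("utf-8"),  # payload must be bytes
        "PartitionKey": event["session_id"],
    }

record = build_put_record("clickstream-demo", {
    "session_id": "sess-42",
    "page": "/pricing",
    "ts": "2020-01-01T12:00:00Z",
})
# kinesis = boto3.client("kinesis")
# kinesis.put_record(**record)
```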

3.    Kinesis Data Firehose – Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores and analytics tools. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, transform, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.

4.    Kinesis Data Analytics – Amazon Kinesis Data Analytics is the easiest way to process streaming data in real time with standard SQL, without having to learn new programming languages or processing frameworks.  It automatically provisions the services necessary to collect, process, analyze, and visualize website clickstream data in real time. This solution is designed to provide a framework for analyzing and visualizing metrics, allowing you to focus on adding new metrics rather than managing the underlying infrastructure.
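To make the idea of real-time stream processing concrete, here is a purely illustrative tumbling-window count in plain Python; it mimics the kind of per-minute clickstream metric Kinesis Data Analytics would express as a windowed SQL aggregation over a live stream.

```python
from collections import Counter

# Illustrative only: count events per 60-second tumbling window, the
# kind of metric Kinesis Data Analytics computes with windowed SQL
# (e.g. COUNT(*) grouped by a 1-minute window) over a live stream.

def tumbling_window_counts(events, window_seconds=60):
    """events: iterable of (epoch_seconds, page) tuples."""
    counts = Counter()
    for ts, _page in events:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        counts[window_start] += 1
    return dict(counts)

clicks = [(0, "/home"), (10, "/pricing"), (65, "/home"), (119, "/docs")]
print(tumbling_window_counts(clicks))  # {0: 2, 60: 2}
```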

If you would like to explore how Amazon Kinesis can help your organization, contact us for a complimentary consultation.  Cloud Rush’s hands on, human approach to IT will help your organization with all of your Amazon Web Service needs.

Let’s Talk!
Amazon EMR Migration Guide - Part 2

Amazon EMR Migration Guide | Part 2

Based on the response we received to Amazon EMR – A Migration Plan, we have decided to elaborate on the Amazon EMR topic.  This post dives deeper into next steps.  Now that you have started your journey to Amazon EMR, gathering requirements, optimization, and security are the next steps in the migration process.

A list of metrics is useful for cost estimation, architecture planning, and instance type selection. You will need to capture each of the following metrics to drive the decision-making process during migration:

*     Aggregate number of physical CPUs
*     CPU clock speed and core counts
*     Aggregate memory size
*     Amount of HDFS storage (without replication)
*     Aggregate maximum network throughput
*     At least one week of utilization graphs for the resources used above 

Now we will cover optimization from the cost, storage, and compute perspectives. With Amazon EMR, you pay a per-second rate for every second that you use the cluster. Amazon EMR provides various features to help lower costs. To make the best use of those features, consider the workload type as well as the instance type; this will help optimize costs.  In addition to cost optimization, storage optimization is equally important.  By optimizing your storage, you can improve the performance of your jobs. This approach enables you to use less hardware and run clusters for a shorter period. Here are some strategies to help you optimize your cluster storage:

*     Partition Data
*     Optimize File Size
*     Compress the Dataset
*     Optimize File Formats
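As one concrete example of the partitioning strategy, a common convention is to embed partition values (such as a date) directly in the S3 key so query engines can skip whole partitions. The bucket layout and names below are hypothetical:

```python
# Hypothetical example of Hive-style partitioned S3 keys: embedding
# dt=YYYY-MM-DD in the key lets engines prune whole partitions, and a
# compressed columnar file format (Parquet + Snappy here) addresses
# the "compress the dataset" and "optimize file formats" strategies.

def partitioned_key(prefix, table, dt, filename):
    """Build an S3 object key with a date partition."""
    return f"{prefix}/{table}/dt={dt}/{filename}"

key = partitioned_key("analytics", "pageviews", "2020-01-01",
                      "part-00000.snappy.parquet")
print(key)  # analytics/pageviews/dt=2020-01-01/part-00000.snappy.parquet
```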

While cost and storage optimization are important, it is imperative to understand compute optimization as well.  Here are some features and approaches for optimizing your Amazon EMR cluster’s compute:

*     Spot Instances
*     Reserved Instances
*     Instance Fleets
*     Amazon EMR Auto Scaling

There are a number of factors to consider when estimating costs for an Amazon EMR cluster. These factors include EC2 instances (compute layer), EBS volumes, and Amazon S3 storage. Due to the per-second pricing of Amazon EMR, the cost of running a large EMR cluster that runs for a short duration would be similar to the cost of running a small cluster for a longer duration.
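A quick back-of-the-envelope check of that claim: with per-second billing, total cost is roughly nodes × hours × rate, so a large, short-lived cluster and a small, long-running one can cost the same. The rate below is a made-up figure for illustration only.

```python
# Back-of-the-envelope check: with per-second billing, instance-hours
# (nodes x hours) drive cost, so a large cluster run briefly costs
# about the same as a small cluster run longer. The hourly rate is a
# hypothetical combined EC2 + EMR figure, not a quote.

def cluster_cost(nodes, hours, rate_per_node_hour):
    return nodes * hours * rate_per_node_hour

rate = 0.10  # hypothetical $/node-hour
big_and_fast = cluster_cost(nodes=40, hours=1, rate_per_node_hour=rate)
small_and_slow = cluster_cost(nodes=10, hours=4, rate_per_node_hour=rate)
print(big_and_fast, small_and_slow)  # 4.0 4.0 -- identical totals
```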

Once optimization is fully detailed, securing your resources on Amazon EMR is the next step.  Amazon EMR has a comprehensive range of tools and methods to secure your data processing in the AWS Cloud. Some best practices are:

*     Design early with security in mind
*     Ensure that the supporting department is involved early in the security architecture
*     Understand the risks
*     Obtain security exceptions
*     Use different security setups for different use cases

Once you have hammered out the next steps of the migration process (gathering requirements, optimization, and security), you will be on your way to taking full advantage of Amazon EMR.  Talking with a cloud service company that is dedicated to helping organizations navigate platforms such as Amazon EMR is critical to the success of your project.  Contact Cloud Rush today for a complimentary assessment for your organization.

Let’s Talk!
Amazon EMR Migration Approaches

Amazon EMR – A Migration Plan

Amazon Web Services (AWS) offers its Amazon Elastic MapReduce (EMR) tool for big data processing and analysis.  The MapReduce software framework allows vast amounts of data to be processed quickly and cost-effectively.  In addition, EMR securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.  This is accomplished by using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, and Presto, coupled with the dynamic scalability of Amazon EC2 and the scalable storage of Amazon S3.  Whether you are running a single-purpose, short-lived cluster or a long-running, highly available cluster, Amazon EMR will provide your organization the flexibility you have been looking for.  Let’s explore further the benefits that Amazon EMR will provide to your business.

Getting Started - Amazon EMR Migration Approaches

When starting your organization’s journey to migrate your big data platform to the cloud, you must first decide how to approach migration. There are three approaches:

1. Re-architect your platform to maximize the benefits of the cloud. This approach requires research, planning, experimentation, education, implementation, and deployment. These efforts cost resources and time but generally provide the greatest return: reduced hardware and storage costs, lower operational maintenance, and the most flexibility to meet future business needs.

2. The lift and shift approach takes your existing architecture and completes a straight migration to the cloud. It is the ideal way of moving workloads from on premises to the cloud when time is critical and ambiguity is high. In addition, it carries less risk and offers a shorter time to market.

3. The hybrid approach blends lift and shift with re-architecture.  It includes the benefit of being able to experiment and gain experience with cloud technologies and paradigms before fully committing to the cloud.

Although there are pros and cons to each, it is imperative to agree on the migration approach your organization is taking before you move to the next step, prototyping.

Amazon EMR Prototyping

When moving to a new and unfamiliar product or service, there is always a period of learning. Usually, the best way to learn is to prototype and learn from doing, rather than researching alone, to help identify the unknowns early in the process so you can plan for them later. Make prototyping mandatory to challenge assumptions. Common assumptions when working with new products and services include the following:

1. A particular data format is the best data format for my use case.
2. A particular application is more performant than another application for processing a specific workflow.
3. A particular instance type is the most cost-effective way to run a specific workflow.
4. A particular application running on-premises should work identically in the cloud.

There are best practices for prototyping, and an AWS partner can help you through them to ensure all assumptions are validated to a high degree of certainty.

Choosing a Team

When starting a migration to the cloud, you must carefully choose your project team to research, design, implement, and maintain the new cloud system. We recommend that your team has individuals in the following roles with the understanding that a person can play multiple roles:

1. Project Leader
2. Big data application engineer
3. Infrastructure engineer
4. Security engineer
5. Group of engineers

Getting started with your migration plan will consist of determining your migration approach, prototyping and choosing your team.  Once these critical items are identified your organization will be able to move to the next steps of the migration plan.  These include gathering requirements, cost estimation, migrating the data and ongoing support.

Cloud Rush is a certified AWS partner.  We specialize in cloud assessments, strategy and planning, cloud migration, managed cloud services, and disaster recovery.  Our “service that never sleeps” philosophy takes a hands-on, human approach to IT.  Let Cloud Rush work with you to start your Amazon EMR migration journey.

Let’s Talk!
Amazon Elastic MapReduce (EMR)

Amazon Web Services – Amazon EMR

Amazon Web Services (AWS) offers its Amazon Elastic MapReduce (EMR) tool for big data processing and analysis.  The MapReduce software framework allows vast amounts of data to be processed quickly and cost-effectively.  In addition, EMR securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.  This is accomplished by using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, and Presto, coupled with the dynamic scalability of Amazon EC2 and the scalable storage of Amazon S3.  Whether you are running a single-purpose, short-lived cluster or a long-running, highly available cluster, Amazon EMR will provide your organization the flexibility you have been looking for.  Let’s explore further the benefits that Amazon EMR will provide to your business.

Amazon Web Services Amazon EMR Benefits

There are many benefits you will reap when you make use of AWS’s Amazon EMR.  Here are the top 5 benefits to using Amazon EMR:

1. Ease of Use – Everybody wants easy, and that is what Amazon EMR provides.  EMR launches clusters in minutes.  There is no need to worry about node provisioning, infrastructure setup, Hadoop configuration, or cluster tuning.  Amazon EMR takes care of these tasks so your team can focus on the analysis. This allows your teams to collaborate and interactively explore, process, and visualize data in an easy-to-use format.

2. Low Cost – Amazon EMR is a low-cost solution with predictable charges.  Amazon EMR is billed at a per-second rate with a one-minute minimum charge.  For example, you can launch a 10-node EMR cluster with applications such as Apache Spark and Apache Hive for as little as $0.15 per hour.
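A small sketch of how per-second billing with a one-minute minimum might be computed, using the example rate above (figures are illustrative, not a pricing quote):

```python
# Sketch of per-second billing with a one-minute minimum, using the
# $0.15/hour 10-node example cluster rate quoted above. Illustrative
# arithmetic only, not an official pricing formula.

def emr_charge(seconds_run, hourly_rate, minimum_seconds=60):
    """Charge for one run: per-second rate with a one-minute minimum."""
    billable = max(seconds_run, minimum_seconds)
    return billable * hourly_rate / 3600

hourly_rate = 0.15  # the example cluster rate quoted above
print(round(emr_charge(600, hourly_rate), 4))  # 10 minutes -> 0.025
print(round(emr_charge(30, hourly_rate), 4))   # 30 s billed as 60 s -> 0.0025
```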

3. Reliable – Amazon EMR will provide the reliability your team needs.  EMR allows your team to spend less time tuning and monitoring your cluster. EMR is tuned for the cloud, and constantly monitors your cluster, retrying failed tasks and automatically replacing poorly performing instances. EMR provides the latest stable open source software releases, so you don’t have to manage updates and bug fixes, leading to fewer issues and less effort to maintain the environment. With multiple master nodes, clusters are highly available and automatically fail over in the event of a node failure.

4. Security – Security is Amazon EMR’s highest priority.  It is a responsibility shared between AWS and your organization.  A security plan will be put into place to ensure your data is secure.

5. Flexible – You have complete control over your cluster. You have root access to every instance, you can easily install additional applications, and customize every cluster with bootstrap actions. You can also launch EMR clusters with custom Amazon Linux AMIs and reconfigure running clusters live without the need to re-launch the cluster.

AWS’s Amazon EMR software for big data processing and analysis is a must for your AWS strategy.  The framework allows your developers to create programs that process immense amounts of data while providing ease of use, low cost, reliability, security, and flexibility. Let’s talk about how Amazon EMR can work for your organization.

Cloud Rush specializes in cloud assessments, strategy and planning, cloud migration, managed cloud services, as well as disaster recovery. Our “service that never sleeps” approach takes a hands-on, human approach to IT. Partnering with best-in-class solutions, Cloud Rush wants to be your partner for your long- and short-term cloud needs.

Amazon Athena

Is Amazon Athena right for you?

Amazon Web Services (AWS) offers Amazon Athena as a service.  Amazon Athena is a cost-effective interactive query service that will make your life easier and save you time and frustration.  This easy-to-use, serverless service allows you to quickly query your data without having to set up and manage any servers or data warehouses.  Amazon has made it as easy as point and click.  It allows you to tap into all of your data without the need to set up complex processes to transform and load it, so there is no ETL.  With that said, let’s explore Athena.


Amazon Athena allows you to control your cost. The service lets you pay per query. You can save 30%–90% on your per-query cost and get better performance by compressing, partitioning, and converting your data into columnar formats. Athena queries the data directly in Amazon Simple Storage Service (S3), so there are no additional charges beyond Amazon S3.
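To illustrate per-query pricing, the sketch below assumes the commonly published rate of roughly $5 per TB scanned (verify current pricing for your region) and shows how a compressed, columnar copy of the same data cuts the bill:

```python
# Rough illustration of Athena's per-query pricing. The $5/TB-scanned
# rate is an assumption based on commonly published pricing; check the
# current rate for your region. Converting data to a compressed
# columnar format (e.g. Parquet) shrinks the bytes each query scans.

def query_cost(tb_scanned, price_per_tb=5.00):
    return tb_scanned * price_per_tb

raw_csv = query_cost(1.0)   # full 1 TB scanned as uncompressed CSV
parquet = query_cost(0.25)  # same data, compressed + columnar (assumed 4x smaller)
print(raw_csv, parquet)     # 5.0 1.25 -> a 75% saving on this query
```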


Amazon Athena is flexible, powerful, and scalable.  Athena uses Presto and works with a variety of formats. It is ideal for quick ad hoc querying but can handle complex queries as well. In addition, Athena uses Amazon S3 as its underlying data storage, making your data highly available and scalable.

Query Time

With Amazon Athena, you don’t have to worry about not having enough computing resources to get fast, interactive query performance. It automatically executes queries in parallel, so most results come back in seconds. Depending on the type of query, it can be even faster if you store the data in a columnar format.

Now that you understand the benefits, we want to demonstrate how easy it is to use this service. There are only 5 basic steps when you are using Athena.

How to Use Amazon Athena

1.    Create an S3 bucket and object

2.    Create a metadata database

3.    Create a schema

4.    Run the Query

5.    Access the History
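Step 4 can be sketched in code: the snippet below assembles boto3-style parameters for Athena’s `StartQueryExecution` call. The database, table, and bucket names are placeholders.

```python
# Sketch of step 4 ("Run the Query") via boto3-style parameters.
# The database, table, and bucket names are placeholders.

ATHENA_QUERY = """
SELECT page, COUNT(*) AS views
FROM clickstream_logs          -- hypothetical table over S3 data
GROUP BY page
ORDER BY views DESC
LIMIT 10
"""

def build_query_request(database, output_bucket):
    """Assemble the arguments for athena.start_query_execution(**request)."""
    return {
        "QueryString": ATHENA_QUERY,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {
            # Athena writes query results to this S3 location (step 1's bucket)
            "OutputLocation": f"s3://{output_bucket}/athena-results/",
        },
    }

request = build_query_request("demo_db", "my-results-bucket")
# athena = boto3.client("athena")
# athena.start_query_execution(**request)
```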

As you can see now, Amazon Athena is cost effective, flexible and easy to use.  This service will save you time and money.  The next step is to contact us!  We can setup a complimentary consultation to review your Amazon Web Service needs. 

Public Cloud Governance

With all the economies of scale afforded through cloud adoption, it is essential to understand that only through public cloud governance can costs be managed, data and infrastructure secured, and the competitive benefits of cloud providers such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Azure realized. For most organizations, cloud adoption spans business units, is siloed, varies in skill level, and generally gets “black-boxed” in conversations. Public cloud governance is not something that can be overlooked or dismissed without an impactful result on the business. You moved to the cloud in part to reduce your capital expenses, but you could also have operational expenses accruing that are not aligned with the forecast. Cloud adoption does not have to be a zero-sum game; you can realize all of the benefits that the cloud has to offer without breaking the bank or losing track of your data. Public cloud governance is a discipline through which both the technical and the business savvy can gain control and keep a finger on the pulse of your cloud footprint at all times. Governance is not just for the enterprise; it is incumbent on any company leveraging the cloud to employ some level of governance, or you will suffer setbacks in areas that were not anticipated.

What is Public Cloud Governance?

At Cloud Rush, we view public cloud governance as having four pillars:
  • Resource management – To govern the cloud, you have to know what is deployed at any point in time.
  • Proactive cost management – It’s not enough to look at your bill. The cloud changes rapidly, and manually keeping up with the pricing matrices can be a tall order. Public cloud governance provides cost savings and aggregated recommendations.
  • Policy compliance – Compliance can be summarized as merely a set of rules. These rules are codified in a way that provides uniform governance that is both proactive and reactive.
  • Access and data security – Public cloud governance must monitor usage patterns for compliance and security purposes, but must also account for and categorize the data you have in the cloud. At the end of the day, compliance officers want on-demand compliance reporting.
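As a toy illustration of the policy-compliance pillar, rules can be codified as simple checks over a resource inventory; here, flagging resources that lack required cost-allocation tags (the resource records and tag names are hypothetical):

```python
# Toy illustration of "rules codified" for the policy-compliance
# pillar: flag resources that are missing required tags. Resource
# records and tag names are hypothetical.

REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def noncompliant(resources):
    """Return IDs of resources missing any required tag key."""
    return [r["id"] for r in resources
            if not REQUIRED_TAGS.issubset(r.get("tags", {}))]

inventory = [
    {"id": "i-001", "tags": {"owner": "ops", "cost-center": "42",
                             "environment": "prod"}},
    {"id": "i-002", "tags": {"owner": "dev"}},  # missing two required tags
]
print(noncompliant(inventory))  # ['i-002']
```

Real governance platforms run checks like this continuously across every account and region, which is why aggregation matters.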

How do we govern the cloud?

Fortunately, cloud governance is achievable for companies of any size. In order to govern your clouds, you must aggregate all of your machine data for analysis in real time, or near real time. Splunk defines machine data as “one of the most underused and undervalued assets of any organization. But some of the most important insights that you can gain—across IT and the business—are hidden in this data: where things went wrong, how to optimize the customer experience, the fingerprints of fraud. All of these insights can be found in the machine data that’s generated by the normal operations of your organization.” Because of the wide array of SaaS solutions in the marketplace, companies are now able to define a monitoring stack that brings all of this machine data together to provide real insights, sophisticated compliance monitoring, and cost tracking. Note, however, that there is NOT a single silver bullet today; your monitoring stack will generally be composed of 2–4 vendors, depending on your organization’s needs. As you might guess, many of these platforms overlap with one another, but each has unique features that fill various voids.

What does a typical monitoring stack look like?

  • Resource Management – When it comes to resource management and configuration management (CMDB), there are a few options: Cloudaware (Cloud Rush recommended), Scalr, CloudCheckr, CloudHealth.
  • Cost Management – Many platforms offer core cost management and have recommendation engines designed to maximize your dollars spent. Some of our favorites are: CloudHealth (Cloud Rush recommended), Cloudaware, Cloudability.
  • Compliance – Organizations have varying levels of compliance needs. Make sure you understand your organization’s compliance and reporting needs; this will help inform vendor selection. Options include: DivvyCloud (Cloud Rush recommended), Cloudaware.
  • Log Aggregation – Everything deployed in the cloud emits data. As a result, these logs must be aggregated for analysis, alerting, reporting, and dashboarding. This data provides operational insights that illuminate your infrastructure as if it were sitting in your on-prem data center. Options include: Splunk Cloud, Scalyr, Sumo Logic, the ELK stack (a “roll your own” platform).
Conclusion

In conclusion, we discussed how important public cloud governance is, where it fits into the organization, and briefly introduced you to vendors in this space. In this five-part series, we’ll take a deep dive into the discipline, and along the way you’ll broaden your knowledge of how to harness all that we do in the cloud.

About the Author

Chris Scragg is a principal cloud architect for Cloud Rush, with years of industry experience related to public cloud governance. Chris’ cloud journey began with a pivot to Amazon Web Services, out of legacy data center environments, back in 2011. A serial entrepreneur, Chris continues to maintain a deep focus on AWS, GCP, and Azure, with an eye toward helping clients increase their competitiveness through digital transformations.