
Amazon Web Services For Dummies

Cloud Computing: Let’s Start From The Basics

Cloud Computing is a technology that delivers hardware and software resources as on-demand services over the Internet. The resources provided include, for example, computing power, storage space, data transmission tools, and essential application services. A company that needs these resources to deliver its online services can rely on a cloud service provider to purchase and configure them, with the guarantee that the set can be expanded over time as business needs and system capacity grow. The characteristics that have made Cloud Computing so successful are:

  1. cost containment
  2. system scalability
  3. security guarantee

Cost Containment

The need to manage your own IT infrastructure is largely eliminated, from purchasing hardware and software to system maintenance, as is the need to employ specialized staff in-house. Furthermore, Cloud Computing comes with cost flexibility based on the actual resources used (pay per use).

System Scalability

The ability to increase or decrease the available resources based on the capacity required to support the load, guaranteeing system stability; for example, adding resources when the amount of data to be managed grows or when traffic to an online service increases.

Security Guarantee

A good service provider puts in place all the practices necessary to ensure compliance with security policies, protection of the data stored in the cloud, and adherence to the certifications required to implement services that demand adequate levels of security (data encryption, authentication, antivirus, disk encryption, etc.).

Types Of Cloud Computing

The main types of Cloud technology are classified as follows:

  1. DaaS (Data as a service): the provider supplies and manages only the data storage
  2. SaaS (Software as a service): the provider delivers and operates a software service
  3. IaaS (Infrastructure as a service): the provider owns and manages only the hardware infrastructure
  4. PaaS (Platform as a service): the provider owns and manages the hardware infrastructure together with software resources, offering an integrated solution for delivering applications

Amazon Web Services

The AWS cloud is an IaaS/PaaS system distributed and available in 190 countries, divided into geographical areas (regions). At the time of writing, 14 regions are active, and four more are expected to launch in the next year. Each region includes several Availability Zones (AZs): in effect, distributed data centers that allow applications to be replicated to ensure scalability and availability in the event of a failure in a particular zone. The region from which you deliver your services and applications should depend on where most of your visitors come from.
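The available regions and Availability Zones can be listed programmatically through the AWS SDKs. Below is a minimal sketch using boto3, the official AWS SDK for Python; it assumes credentials are already configured locally (e.g., via `aws configure`), and the region name eu-west-1 is only an example.

```python
# Minimal sketch: list AWS regions and the Availability Zones of one region.
# Assumes AWS credentials are already configured on the machine.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # example region

# All regions enabled for the account
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
print("Regions:", regions)

# Availability Zones of the region this client is bound to
zones = [z["ZoneName"] for z in ec2.describe_availability_zones()["AvailabilityZones"]]
print("Availability Zones:", zones)
```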


AWS Components, Let’s Translate Some Acronyms

In the AWS system, almost every component or service is identified or recognized by an acronym. Let’s try to shed some light on the main ones.

S3 – Simple Storage Service

S3 was the first AWS component to be launched: an accessible and shareable object storage service. It is particularly suitable for storing static resources (archives, backups, media, and software components downloaded at boot by EC2 machines). Here are the main features (a minimal usage sketch follows the list):

  1. The resources are stored in buckets, logical containers on which access permissions can be set.
  2. Each saved resource is associated with a key inside the bucket.
  3. Archived resources are available via HTTP, pointing to the URL of the service and the object key, and are also accessible via REST/SOAP APIs over HTTPS.
  4. It has no storage limits.
  5. It is a scalable, durable, and reliable service.
  6. It manages resource versioning and lifecycle.
  7. Calls to S3 can be made over HTTPS, and stored objects can be encrypted with managed keys.
  8. Resource access logging can be activated.
  9. Events can be configured (e.g., notifications when objects are created or removed).
  10. You pay for the space occupied.
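As a minimal usage sketch, here is how an object can be stored in a bucket, retrieved, and shared via a time-limited URL with boto3. The bucket name, keys, and file names are placeholders; credentials are assumed to be configured in the environment.

```python
# Minimal sketch: store and retrieve an object in S3, then share it via a
# pre-signed URL. Bucket name and keys are placeholders.
import boto3

s3 = boto3.client("s3")

# Upload a local file under a key inside the bucket
s3.upload_file("backup.tar.gz", "my-example-bucket", "backups/backup.tar.gz")

# Download it again by bucket + key
s3.download_file("my-example-bucket", "backups/backup.tar.gz", "restored.tar.gz")

# Generate a time-limited HTTPS URL for sharing the object
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": "backups/backup.tar.gz"},
    ExpiresIn=3600,  # seconds
)
print(url)
```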

EC2 – Amazon Elastic Compute Cloud

This infrastructure provides on-demand virtual machine instances that can be configured according to your needs. Instances can be resized, turned on, and turned off according to the computing power needed to cope with the load: the first characteristic is, therefore, scalability in terms of load capacity. To activate a machine, you start from an image (AMI – Amazon Machine Image) or a previously saved template, a “machine model.”

You then choose the operating system (Linux/Windows), the Availability Zone in which to turn on the machine, and the size of the instance by selecting the family (e.g., T2, M3, M4, C3, C4, R3, G2, I2, D2), which defines the machine architecture in terms of RAM, CPU, storage disks, and network. You then proceed with the instance’s configuration, specifying shutdown rules, monitoring, access permissions, and other settings. For self-configuration, the machine provides two essential tools (a metadata query sketch follows the list):

  1. Metadata: all the information associated with the machine. The instance’s status can be checked at any time by querying the Metadata.
  2. User Data: information sent to the instance so that scripts are run when the machine boots.
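From inside a running instance, the Metadata can be queried over the instance metadata endpoint. Below is a minimal sketch in Python; it only works when executed on the instance itself, and newer instance configurations may require an IMDSv2 session token, which is omitted here for brevity.

```python
# Minimal sketch: read instance Metadata from inside a running EC2 instance.
# The endpoint 169.254.169.254 is only reachable from the instance itself.
from urllib.request import urlopen

BASE = "http://169.254.169.254/latest/meta-data/"

def metadata(path):
    with urlopen(BASE + path, timeout=2) as resp:
        return resp.read().decode()

print("Instance ID:      ", metadata("instance-id"))
print("Instance type:    ", metadata("instance-type"))
print("Availability Zone:", metadata("placement/availability-zone"))
```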

Main features (a launch sketch follows the list):

  1. High scalability.
  2. Possibility to take advantage of the Auto Scaling service: instances are automatically turned on based on logic we define; we can set metrics on resources (e.g., CPU) or on time slots based on the expected traffic.
  3. Root access to the machine.
  4. Possibility of defining an access key to the machine via SSH.
  5. Maximum control of the resource (in terms of hardware and software).
  6. Ability to manage tags on resources to identify them within the cloud.
  7. Ability to configure security groups.
  8. Possibility of exploiting traffic balancing using Elastic Load Balancing.
  9. Instances run on up-to-date Intel Xeon hardware.
  10. You only pay for the resources used (from when the machine is turned on).
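Here is a minimal sketch of launching an on-demand instance with boto3. The AMI ID, key pair name, and security group ID are placeholders used purely for illustration.

```python
# Minimal sketch: launch an on-demand EC2 instance. All IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder AMI (machine image)
    InstanceType="t2.micro",            # instance family/size
    MinCount=1,
    MaxCount=1,
    KeyName="my-ssh-key",               # key pair for SSH access
    SecurityGroupIds=["sg-0123456789abcdef0"],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-instance"}],
    }],
    UserData="#!/bin/bash\nyum -y update\n",  # script run at first boot
)
print("Launched:", response["Instances"][0]["InstanceId"])
```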

EBS – Elastic Block Storage

EBS volumes are storage volumes (disks) mounted on EC2 instances. They reside in the same Availability Zone as the EC2 instance to which they are attached and are particularly suitable for data that requires high persistence (regardless of the state of the machine to which they are attached). They are therefore ideal as a primary file system, as database storage, or for storing and managing data at high read/write rates.

Several EBS volumes can be mounted on a single EC2 instance, but a single EBS volume can serve only one EC2 instance at a time. For high input/output loads on a disk, you can choose the Provisioned IOPS SSD type to guarantee higher I/O performance. Disk backups can be managed through the Amazon EBS Snapshot service; note also that attention must be paid to the type of IP address associated with the instance, so that the reference IP addresses persist across stop/start actions on the EC2 instance on which the volume is mounted.
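A minimal sketch of creating a volume, attaching it to an instance in the same Availability Zone, and taking a snapshot with boto3 follows. The instance ID and Availability Zone are placeholders.

```python
# Minimal sketch: create an EBS volume, attach it, and snapshot it.
# The volume must live in the same AZ as the instance it attaches to.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

volume = ec2.create_volume(
    AvailabilityZone="eu-west-1a",  # must match the instance's AZ
    Size=100,                       # size in GiB
    VolumeType="gp2",               # general-purpose SSD ("io1" for Provisioned IOPS)
)
volume_id = volume["VolumeId"]

# Wait until the volume is available, then attach it to the instance
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(
    VolumeId=volume_id,
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",                 # device name exposed to the instance
)

# Back up the volume through the EBS Snapshot service
ec2.create_snapshot(VolumeId=volume_id, Description="nightly backup")
```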

VPC – Amazon Virtual Private Cloud

It is a logically isolated virtual private environment within AWS where EC2 instances reside. By default, Amazon creates a private VPC network for the user account and assigns the active AWS services to it; you can always configure the characteristics of the VPC network or create new ones.

The VPC is highly customizable: you can select IP address ranges, configure a hardware VPN with your corporate network, and define access permissions to the VPC. It is also possible to create subnets; a subnet is always linked to an Availability Zone. The VPC provides different levels of security with the possibility of configuring (a security-group sketch follows the list):

  1. Security groups: linked to EC2 instances launched within the VPC. They work as firewalls that regulate inbound/outbound traffic within the VPC. They operate at the level of the single EC2 instance and are stateful.
  2. Network ACLs: they filter traffic at the subnet level, control traffic based on access type (inbound/outbound), and are stateless.
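As a minimal sketch, here is how a security group can be created inside a VPC and opened for inbound HTTP and SSH traffic with boto3. The VPC ID and CIDR ranges are placeholders.

```python
# Minimal sketch: create a security group in a VPC and allow inbound HTTP/SSH.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

sg = ec2.create_security_group(
    GroupName="web-servers",
    Description="Allow HTTP from anywhere and SSH from the office network",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},       # HTTP from anywhere
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},  # SSH from an example corporate range
    ],
)
# Being stateful, the group automatically allows the corresponding return traffic.
```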

Database And Cache

Amazon provides several database management services: relational and NoSQL database engines, caching systems, and cloud database migration services. Below is a list (a DynamoDB usage sketch follows):

  1. Amazon DynamoDB: a fully managed NoSQL database; it is possible to calibrate the provisioned throughput (read/write capacity) based on the expected usage.
  2. Amazon RDS: relational database manager (supported DB engines: MySQL, MariaDB, Microsoft SQL Server, PostgreSQL, Oracle). The database instances run inside the VPC; the service is scalable and deployable across different Availability Zones. It is possible to configure automatic backups by setting scheduling and retention.
  3. Amazon Aurora: a relational database compatible with MySQL but (according to Amazon) significantly more performant.
  4. Amazon ElastiCache: a caching service that implements the Redis and Memcached engines; a scalable and distributable service.
  5. Amazon CloudWatch: Amazon’s monitoring system. It has a set of metrics (e.g., the CPU usage of a particular instance or group of instances) to monitor the AWS resources instantiated within your VPC. It is possible to link alarms to metrics in order to act, for example by sending an email notification or by adding a new machine to the autoscaling group. Custom metrics can be created in addition to Amazon’s own. The service provides graphical and statistical reports on the operational status of the monitored resources.
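Here is a minimal DynamoDB sketch with boto3: it writes an item and reads it back by key. The table name "Users" and its partition key "user_id" are assumptions; the table is assumed to exist already, with its read/write capacity configured at creation time.

```python
# Minimal sketch: write and read an item in a DynamoDB table.
# Assumes a table "Users" with partition key "user_id" already exists.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="eu-west-1")
table = dynamodb.Table("Users")

# Write an item (a schemaless document identified by its key)
table.put_item(Item={"user_id": "42", "name": "Alice", "plan": "free-tier"})

# Read it back by key
item = table.get_item(Key={"user_id": "42"})["Item"]
print(item["name"])
```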

Amazon AWS Console

Amazon provides a web application to access the AWS resource management console. To start using AWS, you need to register for a free account and link a credit card; remember that, apart from the services that are free for the first year, any other resources you instantiate and use will be charged according to the chosen pricing policy. From the AWS console, you can configure your account, turn resources on and off, configure them, and access the statistics of the monitoring system. It is also possible to configure networking (VPC, ACLs, security groups) and all the rules relating to security, identity, and access management (IAM users, groups and roles, federated users).

Pricing Policies

There is the AWS Free Tier, a free plan that can be used to get started with AWS without incurring costs. Depending on the type of service, there are different pricing policies, which can be explored at this link: https://aws.amazon.com/it/pricing/services/.

Conclusions

It would be impossible to explore the entire range of AWS products in a single article, so I refer you to the official website, which explains the characteristics of each service in detail and offers tutorials to start using this technology. By registering for the service, you can access the AWS free 1-year plan, which allows free use (with some limitations) of most of the services.

