AWS Solutions Architect

Course by Priya Krishnamaraju, updated more than 1 year ago

Description

AWS section 2

Module Information

1. The AWS Platform - structure diagram - refer AWSPlatformServices for exam

2. Global infrastructure
a) Region - a geographic location where servers are physically placed. May consist of multiple availability zones - e.g. 2 or 3. Ref Region.gif. The multiple availability zones in a region are mostly independent of each other.
b) Availability zone - a physical data center.
c) Edge locations - CDN endpoints for CloudFront (CDN - content delivery network). Say the data is in Texas and is requested from NJ: it travels once to NJ and is cached at an edge location. For any further requests for the same data from NJ, the data doesn't travel again. There are many more edge locations than regions - e.g. refer North America Region Edgelocations Map.gif

3. Services
a) Networking & Content Delivery
VPC - Virtual Private Cloud (a virtual data center)
Route 53 - Amazon's DNS service - looks up public IP addresses; named after DNS port 53
CloudFront - a bunch of edge locations
Direct Connect - a dedicated physical line between AWS and your business network

b) Compute
EC2 - virtual machines in the cloud
EC2 Container Service - highly scalable, powerful container management service (CMS) using Docker containers; used for clustering
Elastic Beanstalk - lets you upload your code directly to AWS; Beanstalk goes over your code and takes care of provisioning all the infrastructure needed for it
Lambda - serverless; no access to virtual machines
Lightsail - will auto-deploy; for those who don't know AWS

c) Storage - refer Storage.gif
S3 - Simple Storage Service - a virtual disk in the cloud for object-based storage. Objects are files - e.g. media, Excel etc. - but not a DB.
Glacier - archive files from S3
EFS - Elastic File System - file-based storage that you can share. You can install DBs and applications on it and share it with multiple virtual machines.
Storage Gateway - connects S3 to your own premises or headquarters; it's a virtual machine image you run on-site that connects to S3

d) Databases
RDS - Relational Database Service - supports a number of DB engines: MySQL, SQL Server, PostgreSQL, Oracle, Aurora, MariaDB
DynamoDB - NoSQL DB, extremely scalable
Redshift - Amazon's data warehousing solution; used for big data
ElastiCache - caching in the cloud. Frequently visited parts of a website/application can be cached with ElastiCache, taking load off the DB

e) Migration Services
Snowball - import/export data to Amazon. It's a briefcase-sized appliance onto which you can load terabytes of data; it then connects back to AWS, which transfers the data to S3. Snowball Edge is the recent upgrade to this.
DMS - Database Migration Service - transfer/migrate DBs to the cloud, Redshift etc. The original DB can also be migrated to a different engine on AWS - e.g. Oracle on your premises to Aurora on AWS
SMS - Server Migration Service - migrates servers to the cloud; especially useful to migrate the VMs on your premises. Can migrate 50 servers concurrently

f) Analytics
Athena - lets you run SQL queries on S3 objects (files); essentially turns your CSV/JSON files into searchable databases
EMR - used for big data, like log analysis and financial reporting; at the base it uses Hadoop
CloudSearch - used to create a search engine for your website or application. CloudSearch and Elasticsearch are similar: CloudSearch is a fully managed search service provided by Amazon; Elasticsearch is open source
Elasticsearch Service
Kinesis - real-time analysis/streaming of large data - terabytes in an hour. Usage e.g. financial transactions, social media feeds, sentiment analysis for a product
Data Pipeline - to move/migrate data, e.g. migrate data from S3 to DynamoDB and vice versa
QuickSight - display/dashboard building based on your data in the cloud; a business analytics tool

g) Security & Identity
IAM - Identity & Access Management - assigning access, users etc.
Inspector - an agent you install on your virtual machine; it gives security reports on the processes running on the VM
Certificate Manager - free SSL certificates to use on your domain names
Directory Service - like Microsoft Active Directory
WAF - Web Application Firewall - application-level protection. Traditional firewalls protect at the network level; this one protects at the application level, preventing cross-site scripting, SQL injection etc.
Artifact - a way to access compliance documents

h) Management Tools
CloudWatch - monitors your AWS environment; you can get CPU utilization, VM utilization, disk utilization
CloudFormation - transforms infrastructure into code: templates of the commands needed to create/deploy your cloud resources
CloudTrail - auditing of your AWS resources, e.g. a trail to track someone adding a user
OpsWorks - automating deployments using Chef (need to research)
Config - configuration rules/alerts, so an audit entry or notification fires when the configured condition is met
Service Catalog - lets you authorize certain services, and not others, across your EC2 instances
Trusted Advisor - AWS environment advisor; tips for performance, cost optimization etc.

i) Applications
Step Functions - a way to visualize what's going on in an application - the microservices that are part of the bigger service
SWF - Simple Workflow - facilitates the fulfillment of an order that includes human and automated tasks; it's used in Amazon fulfillment centers
API Gateway - a doorway to access backend services. Lets you create, manage and maintain the services your app uses to access backend data
AppStream - streaming desktop apps to users
Elastic Transcoder - used with videos; converts any video format to a type compatible with the receiving/client devices

j) Developer Tools
CodeCommit - it's GitHub in the cloud; lets you store your code privately or openly
CodeBuild - to build your code in the cloud; paid per minute
CodeDeploy - deploy your code to EC2 instances in an automated & very regulated fashion
CodePipeline - tracks different versions of code through environments like test, stage, UAT etc.

k) Mobile Services
Mobile Hub - add, create & design services for your mobile app - includes user authentication, data storage, push notifications etc. AWS has a separate console for mobile apps which is part of Mobile Hub. The console has the parts listed below.
Cognito - helps integrate sign-in/sign-out into your apps; helps integrate with different social identity providers
Device Farm - provides an environment to test your Android, iOS and Fire OS apps on a farm of different devices
Mobile Analytics - cost-effective and efficient way to analyse mobile data
Pinpoint - engage with your app users / get data on user behaviour; it's like Google Analytics combined with targeted marketing

l) Business Productivity
WorkDocs - store your work docs in the cloud, tied with a lot of security
WorkMail - Exchange on AWS - for sending and receiving emails

m) IoT - to keep track of millions of devices

n) Desktop & App Streaming
WorkSpaces - like a thin client, with the OS installed in the cloud
AppStream 2.0 - stream desktop applications to your users

o) Artificial Intelligence
Alexa - AWS voice service. When you talk to an Echo it talks to Alexa, which connects to services through Lambda
Lex - no need for an Echo to communicate with Alexa; Lex can be embedded within any software
Polly - text-to-voice service; it's what's inside Alexa
Machine Learning - feed a data set and its outputs to AWS; this lets it predict the output for a similar future dataset
Rekognition - feed it a picture and it will give you the objects in it, like a bike, outdoors etc.

p) Messaging
SNS - Simple Notification Service - email, SMS, publishing
SQS - queue system - decouples an application from its messages, so even if the app is down it can pick messages off the queue once it comes back up

TODOs
1. aguru.com
2. Check out the AWS Solutions Architect exam blueprint; $150 for certification
3. Check AWS services on their website
4. Algolia - related to cloud/elastic search
5. Learn more about SSL certificates and how they work
6. From IAM section 3 - research power users
7. Read the S3 FAQ
8. Adobe - RTMP protocol

To Review
- Elastic MapReduce
- Storage Gateway
- Specs for RRS, IA & S3
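The SQS decoupling idea above (the app can be down and still pick its messages off the queue once it comes back up) can be sketched with Python's standard `queue` module. This is a toy illustration of the pattern, not the SQS API; the producer/consumer names are mine:

```python
from queue import Queue

# A stand-in for an SQS queue: producer and consumer never talk to
# each other directly, only to the queue between them.
orders = Queue()

def producer(order_ids):
    # The front-end keeps enqueueing work even while the back-end is down.
    for oid in order_ids:
        orders.put(oid)

def consumer():
    # When the back-end comes back up, it drains whatever accumulated.
    processed = []
    while not orders.empty():
        processed.append(orders.get())
    return processed

producer([101, 102, 103])   # back-end "down" while these arrive
print(consumer())           # back-end recovers and works through the backlog
```

The point of the indirection is that neither side needs the other to be alive at send time, which is exactly the decoupling the note describes.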
Refer IAM features diagram

IAM features (refer to the IAM Features snippet):
- provides centralized control of your AWS account
- shared access to your AWS account
- granular permissions
- temporary access
- identity federation - e.g. Facebook, Google etc.
- multi-factor authentication
- password rotation policy
- integrates with other AWS services
- PCI DSS compliance

Critical terms
a) Users - people using the AWS environment
b) Groups - a group of people under one set of permissions/policies
c) Roles - create roles and assign them to AWS resources, e.g. assign a role to an EC2 instance to enable it to write to S3
d) Policies - a set of permissions. A policy can be applied to a user, group or role

Lab 1
1) Steps to get to IAM:
a) Select the closest region on the top right
b) Go to Services on the top left
c) IAM is under Security, Identity & Compliance
2) IAM is global - users/roles you create here are available globally; the region shows as Global.
3) IAM users sign-in link: https://priya-aws-2017.signin.aws.amazon.com/console. It originally has a number in place of priya-aws-2017; that number is the AWS account number, which can be accessed through your account page. Click on customize to put in your alias name, e.g. priya-aws-2017.
4) Refer LAB-summary pic. Summary:
a) MFA - multi-factor authentication - did it by opting for a virtual device (the other option was a hardware device). Download an authenticator onto another device - a phone etc. I downloaded the Android Google Authenticator app and opted for barcode scan instead of entering the code. Scanned the barcode on the MFA page in AWS with my device and got a code; entered it, waited for it to change, entered the next code and clicked next. My AWS account was set up for MFA.
b) Created 2 users and assigned them to a created group called system-admins; assigned them the AWS AdministratorAccess policy as opposed to SystemAdministrator. You can look at the JSON format of each policy document before assigning it.
c) Applied an IAM password policy - a password policy is a set of rules that define the type of password an IAM user can set, e.g. require at least one uppercase letter.
d) Changed a user to a different group.
e) Created a role and assigned it the AmazonS3FullAccess policy - EC2 with access to S3.
f) Generated credentials for the users created. The username/password and access key ID/secret access key are given only once and are available for download right then. The username/password is used to log in to the console; the access key ID/secret access key are used for programmatic access.

Lab 2 - Create a billing alarm - when billing goes above $10, sound an alarm.
AWS - MyName on top right - Billing, or Services - Management Tools - CloudWatch, and create a billing alarm.
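The policy documents viewed in step (b), such as AmazonS3FullAccess, are just JSON. A minimal sketch of the same shape - "2012-10-17" is the policy-language version, not a timestamp, and the broad allow-everything statement here mirrors what a full-access policy looks like:

```python
import json

# Roughly the shape of AmazonS3FullAccess:
# allow every S3 action on every resource.
policy = {
    "Version": "2012-10-17",      # policy language version
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",     # all S3 actions
            "Resource": "*",      # all buckets and objects
        }
    ],
}

print(json.dumps(policy, indent=2))
```

A role's permissions come from documents of exactly this form attached to it; a more restrictive policy would narrow the Action list and the Resource ARNs.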
S3 - 101
1. S3 - Simple Storage Service
2. Object storage - files, videos, photos, media etc. There are 2 types of storage: block & object.
3. 0 bytes to 5 terabytes per file, unlimited storage - Amazon monitors storage availability in each region & provisions SANs as needed
4. Files are stored in buckets - a bucket is a folder in Amazon terms
5. Cloudberry - provides cool explorer-type apps to access S3
6. S3 is a universal namespace - bucket names must be unique globally, e.g. http://s3-eu-west-1.amazonaws.com/acloudguru
7. When you upload a file to S3, an HTTP 200 OK code means it was successful
8. Data consistency model for S3:
a) Read-after-write consistency for PUTs of new objects
b) Eventual consistency for overwrite PUTs & DELETEs (takes time to propagate)
9. S3 is a key-value store with a lexicographic design, so objects are listed in alphabetical order of file name.
Key - the name of the file
Value - the data, a sequence of bytes
Version ID
Metadata - data about the data, like the date etc.
Subresources - do not exist on their own; they exist under an object:
a) Access control lists - the users/groups that have access to the object. This access can be defined on an individual object or on a whole bucket
b) Torrent - S3 supports the BitTorrent protocol
10. S3 specs
a) Built for 99.99% availability - SLA
b) Guarantees 99.999999999% (eleven 9s) durability of the information stored - can survive the concurrent failure of 2 data centers
c) Tiered storage available - e.g. files over 30 days old in one tier, newer ones in another
d) Lifecycle management - tier management configuration settings
e) Versioning
f) Encryption
g) Secure your data using access control lists & bucket policies
11. S3 storage tiers/classes
a) S3 - 99.99% availability & eleven 9s durability
b) S3-IA (Infrequently Accessed) - priced lower than S3, but charged for retrievals
c) Reduced Redundancy Storage - much cheaper, but durability is only 99.99%
d) Glacier - archival only; restoration is slow and may take 3-5 hours. Cheapest.
Refer S3 Tiered Storage specs. Refer S3 vs Glacier.
12. S3 charges - charged for:
a) Storage
b) Requests
c) Storage management
d) Transfer pricing - data coming into S3 is free, but transferring it around costs
e) Transfer acceleration

S3 Bucket Lab
1. Created a bucket: Storage services - click on S3
2. Uploaded a file
3. Added users to the bucket
4. Can set encryption: client-side encryption; server-side encryption with Amazon-provided keys (SSE-S3); server-side encryption with KMS (SSE-KMS); server-side encryption with customer-provided keys (SSE-C)
5. Security is through ACLs (access control lists) & bucket policies
6. By default all buckets & the objects in them are private
For a bucket there are 4 tabs: Overview, Properties, Permissions & Management - similarly for a file.
Added tags to the object itself - a tag added to the bucket doesn't get passed on to its objects.

S3 Versioning Lab
1. Once versioning is enabled it can only be suspended, not removed
2. It writes/stores every change to a file as a separate object - even a delete is stored
3. Provides an additional level of security - MFA (multi-factor authentication) can be required to change versioning state and to delete

Cross-Region Replication (CRR)
Enabling CRR only copies future changes from the source to the destination bucket - the existing contents must be copied with the AWS CLI.
aws configure - will ask for the access key ID, secret access key and region.
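The key-value description in point 9 can be sketched as a toy model: a bucket maps keys (file names, listed lexicographically) to a value plus metadata. This is purely an illustration of the concept, not the S3 API:

```python
# Toy model of an S3 bucket as a key-value store.
bucket = {}

def put_object(key, value, metadata=None):
    # Key = file name, Value = the bytes, plus metadata about the data.
    bucket[key] = {"value": value, "metadata": metadata or {}}

put_object("beta.txt", b"world", {"date": "2017-01-02"})
put_object("alpha.txt", b"hello", {"date": "2017-01-01"})

# Listing a bucket returns keys in lexicographic (alphabetical) order,
# regardless of upload order.
print(sorted(bucket))                 # ['alpha.txt', 'beta.txt']
print(bucket["alpha.txt"]["value"])   # b'hello'
```

The lexicographic ordering is why the notes call out "alphabetical order of file name": key layout, not upload time, determines listing order.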
aws s3 ls - lists your buckets
aws s3 cp --recursive s3://sourcebucket s3://destbucket - copies the existing contents across
Delete markers are replicated, but deleting individual versions or delete markers is not replicated.
Cross-region replication works at a high level - the two buckets must be in different regions.

Lifecycle Management / Rules - Lab
Glacier is not available in Singapore & South America - so create your buckets for this lab in some other region.
AWS console -> Services -> Storage -> S3 -> create bucket (no caps allowed in bucket names); in the Properties tab enable versioning.
On selecting the bucket you get the bucket screen with the Overview, Properties, Permissions & Management tabs. Click on the Management tab to find the Lifecycle tab. On click of the Lifecycle tab it lets you create a lifecycle rule, and a pop-up dialog takes you through the rule-setting process.
The rule can be set on a bucket or an individual file.
1. Current versions:
a) Setting to transition to Infrequent Access (IA) - has to be a minimum of 30 days after creation
b) Setting to transition to Glacier archival - has to be in IA for a minimum of 30 days, so 60 days from creation is the minimum here
2. After a file becomes a previous version:
a) Setting to transition to IA
b) Setting to transition to Glacier archival - here no limit on days
c) Setting to expire - when this is set, only a delete marker is added against the current version at the expiry date. If the delete has to actually happen, it must be combined with the permanent-delete option.
Once you create the rule, the summary screen shows the transitions & expirations.
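The minimum transition windows above (30 days to IA, then at least another 30 days in IA before Glacier, i.e. 60 days from creation) come out of a simple date calculation:

```python
from datetime import date, timedelta

def lifecycle_dates(created, days_to_ia=30, days_ia_to_glacier=30):
    # Earliest allowed transition dates under the minimums described above.
    to_ia = created + timedelta(days=days_to_ia)
    to_glacier = to_ia + timedelta(days=days_ia_to_glacier)  # 60 days from creation
    return to_ia, to_glacier

ia, glacier = lifecycle_dates(date(2017, 1, 1))
print(ia)       # 2017-01-31
print(glacier)  # 2017-03-02
```

So for an object created on 1 Jan, the earliest IA transition is 31 Jan and the earliest Glacier transition is 2 Mar - exactly 60 days after creation.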
Exam tips - lifecycle rules:
- can be used in conjunction with versioning
- can be applied to current versions / previous versions
- transition rules: minimum 30 days for IA & 60 days for Glacier archiving
- permanent delete

CDN Overview
A content delivery network is a system of distributed servers that deliver webpages and other web content to a user based on their geographic location, the origin of the content & the content delivery server.
Edge location - the location where content is cached; different from an AWS region.
Origin - the origin of all the files the CDN distributes, e.g. a website in London, Europe.
Distribution - a CDN consisting of a cluster of edge locations:
a) Web distribution - for websites only
b) RTMP - for media streaming - for Adobe Flash files using the RTMP protocol
Exam tips:
- An edge location is not read-only - write/PUT of an object is allowed, and the write goes through to the object on the origin server
- TTL - time to live - objects stay in the cache until the TTL expires
- You can clear cached objects, but you will be charged

CloudFront Distribution - Lab
Services -> Networking & Content Delivery -> CloudFront -> create distribution
Origin domain name - prefilled with your bucket names
Origin path - a user-friendly domain name; otherwise it's a random collection of letters & numbers
Origin ID/description - must be unique within a distribution; an object may have multiple origins
Cache behaviour settings:
- read-only access for the distribution to the bucket/file
- allowed HTTP methods - just read, or PUT, OPTIONS etc.
- allow only logged-in users - apply security to restrict access to signed-in users / signed URLs
- use origin cache headers
- configure min & max TTLs
Distribution settings:
- use all or specific edge locations - price class
- alternate domain name
- SSL certificate - default or client SSL certificate
- you can apply geo-restriction also

Security & Encryption
Security
- Bucket policies & access control lists; access control lists can drill down to specific objects in a bucket
- Objects in a bucket are private by default
- Access logs - give a log of all requests/access to your bucket. These can also be sent to another bucket or account

Encryption
In transit - when an object is transferred into or out of S3: SSL/TLS (HTTPS)
At rest - server-side encryption (SSE):
- SSE with Amazon-provided keys - SSE-S3
- SSE with the AWS Key Management Service - SSE-KMS - provides envelope management of your encryption keys, and also an audit trail for key usage
- SSE with customer-provided keys - SSE-C
Client-side encryption

Storage Gateway
Connects an on-premises IT environment with cloud storage to provide secure & scalable data transfer & storage from your premises to the AWS cloud.
Your data center -> asynchronous replication -> AWS (S3 or Glacier)
Storage Gateway (SG) is available as software for download in the form of a VM image. It supports VMware ESXi or Microsoft Hyper-V. Once installed in your data center and connected to your AWS account through the activation process, the gateway can be set up using the AWS console with the options that work for you.
4 different types of gateways:
- File Gateway - uses NFS (Network File System) to store flat files in S3. All data is stored only in S3; nothing on-site.
- Volume Gateway - for block storage; takes point-in-time snapshots and stores them in S3 using Amazon EBS (Elastic Block Store). Blocks/snapshots are stored incrementally, so only the latest changes are stored.
a) Stored volumes - data is asynchronously backed up via iSCSI block storage. All data is stored on-premises and backed up to S3.
b) Cached volumes - all data is stored in S3; only the most frequently accessed data is cached on site.
- Gateway Virtual Tape Library - used for backup; works with popular backup applications, but with virtual tape cartridges.

Snowball
A petabyte-scale physical data transfer device - used to transfer large amounts of data into and out of AWS effectively, without the high network cost.
Snowball - 80 TB, onboard storage capabilities
Snowball Edge - 100 TB, onboard storage & computing capabilities - basically data processing on-premises with Lambda functions
1024 TB is 1 petabyte & 1024 petabytes is 1 exabyte.
Snowmobile - exabyte-scale data transfer; you can transfer 100 PB at a time with Snowmobile.

Snowball Lab
It's under Migration in Services - click Create Job to create a job for AWS to send you the Snowball. Keep pressing Next and enter your address details etc. A workflow block diagram is shown; the job is midway when the Snowball has been delivered to you.
Open the flaps on the left & right narrow sides of the cuboid - one side is the actual Snowball Kindle display; the other end has the ethernet cable access. The top has the power jack, which needs to be connected to the power cable.
Log on to AWS and download the Snowball client & install it on your PC. Get your credentials & download the manifest for the CLI. Power up the Kindle.
snowball -i <internal IP> -m <manifest name> -u <credentials>
snowball cp <filename> <bucket link> - the bucket link (which you can also get from your create-job request) will be of the pattern s3://bucketname
It starts copying; once done, power off & create a job for AWS to pick it up.

Transfer Acceleration
Instead of uploading directly to a bucket, it lets you upload to an edge location, which then uploads to your bucket faster. This service comes at an additional cost. When it is enabled, a unique endpoint URL is given with s3-accelerate in its domain; through this link you write to the edge location instead of directly to the bucket.
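The edge-cache behaviour from the CDN overview (an object stays cached until its TTL expires; the next request after that goes back to the origin) can be sketched as a toy cache. The class and field names are mine, not a CloudFront API:

```python
class EdgeCache:
    """Toy edge location: serves cached objects until their TTL expires."""

    def __init__(self, origin, ttl_seconds):
        self.origin = origin      # stand-in for the origin server's content
        self.ttl = ttl_seconds
        self.cache = {}           # key -> (value, expiry time)
        self.origin_hits = 0      # how many requests travelled to the origin

    def get(self, key, now):
        if key in self.cache:
            value, expires = self.cache[key]
            if now < expires:
                return value      # served from the edge; origin untouched
        # Cache miss or expired TTL: fetch from origin and re-cache.
        self.origin_hits += 1
        value = self.origin[key]
        self.cache[key] = (value, now + self.ttl)
        return value

edge = EdgeCache({"video.mp4": b"data"}, ttl_seconds=60)
edge.get("video.mp4", now=0)    # first request travels to the origin
edge.get("video.mp4", now=30)   # within TTL: served from the edge cache
edge.get("video.mp4", now=90)   # TTL expired: origin is fetched again
print(edge.origin_hits)         # 2
```

This is the Texas-to-NJ example from the start of the notes: the data travels once, then nearby requests are served from the edge until the TTL runs out.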
Static Website Hosting
http://pri-staticwebsite.s3-website-us-east-1.amazonaws.com/
Create an S3 bucket and in Properties enable static website hosting. Give public read access - only then will the website work. Add the index & error HTML file names to the static website properties, then create the mentioned index & error files and upload them to the bucket. Now go to static website hosting and click on the endpoint URL - the website displays.
The URL is always http://bucketname.s3-website-regionname.amazonaws.com
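Since the endpoint always follows that pattern, it can be generated mechanically; a one-line helper (the function name is mine):

```python
def s3_website_url(bucket, region):
    # Pattern from the notes:
    # http://bucketname.s3-website-regionname.amazonaws.com
    return "http://{}.s3-website-{}.amazonaws.com".format(bucket, region)

print(s3_website_url("pri-staticwebsite", "us-east-1"))
# http://pri-staticwebsite.s3-website-us-east-1.amazonaws.com
```

Plugging in the lab's bucket name and region reproduces the endpoint URL shown at the top of this section.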
EC2
- Provides resizable compute capacity in the cloud. It reduces the time to obtain and boot a server to minutes, allowing you to scale server capacity up or down as your computing requirements change.
- Pay only for the capacity used.
- Provides developers the tools to build failure-resilient applications.

EC2 pricing types
On-Demand - no commitments; fixed rate by the hour (recently updated to per-second billing)
a) Applicable when you don't want long-term commitments but want low-cost computing and Amazon EC2's flexibility
b) Deploying and testing your applications for the first time
c) Applications with short-term, spiky, unpredictable workloads that cannot be interrupted

Reserved - reserve capacity upfront by signing a contract for a 1 or 3 year term, which gives good discounts on the hourly/per-second rates
a) Applications with steady capacity and a predictable number of users
b) Applications with constant capacity requirements
c) Types:
- Standard RIs - up to 75% off the per-hour charge
- Convertible RIs - up to 54% off; the configuration of the reserved capacity may be altered, as long as the exchange is of equal or greater value than the original reservation, e.g. changing from Windows to Linux
- Scheduled RIs - the reserved capacity is deployed only for the time window requested; for special short-term recurring scenarios where capacity usage can be scheduled - lets you reserve for a predictable, recurring schedule

Spot - bid a rate for the capacity. Useful if:
- the application has flexible start & end times
- the application needs very low-cost compute capacity
Use case examples: an insurance company needs to process a large amount of data and the timing is flexible; an emergency need for additional computing power

Dedicated Hosts - a dedicated physical EC2 server. Useful to save on licensing costs. Some use cases:
- Government agencies that don't consider shared cloud EC2 secure and want a dedicated instance
- Regulatory requirements that rule out multi-tenant virtualization
- Licensing that doesn't support multi-tenant virtualization or cloud deployment
Can be bought on demand, or on reservation, where you get up to 70% off the on-demand price.

EC2 instance families - mnemonic: DR Mc GIFT PX
D2 - Dense storage
R4 - RAM/memory optimized
M4 - mainly used for general purpose apps
C4 - compute/CPU intensive
G2 - graphics intensive - video streaming / 3D applications
I2 - high-speed storage (IOPS) - NoSQL DBs, data warehousing
F1 - Field Programmable Gate Array (FPGA) - hardware acceleration of your code
T2 - cheap general purpose servers
P2 - extreme graphics - machine learning, bitcoin mining
X1 - extreme memory - SAP HANA, Apache Spark

EBS - Elastic Block Store
Used to create storage volumes that attach to EC2 instances. Once created they can be used to set up a file system, run databases, or in any other way a block device can be used. EBS volumes are placed in a specific availability zone, where they are replicated to protect you from the failure of a single component.

EBS volume types
- General Purpose SSD (GP2) - 3 IOPS per GB (IOPS = input/output operations per second), up to 10,000 IOPS
- Provisioned IOPS SSD - for I/O-intensive applications; from 10,000 IOPS up to 20,000 IOPS
- Magnetic disk types:
a) Throughput Optimized HDD (ST1) - sequentially written; cannot be used as a boot volume; used for big data, logging, data warehousing
b) Cold HDD (SC1) - infrequently accessed data; file servers etc.
c) Magnetic (Standard) - cheapest of the bootable EBS types. Magnetic volumes are suitable for workloads where data is accessed infrequently and the lowest storage cost is important.

Review & research: instance store volumes
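The GP2 baseline above (3 IOPS per provisioned GB, capped at 10,000 IOPS) makes an easy worked calculation; the helper name is mine:

```python
def gp2_baseline_iops(size_gb, per_gb=3, cap=10000):
    # General Purpose SSD (GP2): 3 IOPS per GB, up to the 10,000 IOPS cap
    # described in the notes.
    return min(size_gb * per_gb, cap)

print(gp2_baseline_iops(100))    # 300 IOPS for a 100 GB volume
print(gp2_baseline_iops(5000))   # 15,000 would exceed the cap -> 10000
```

Anything needing more than the cap is the cue to move to Provisioned IOPS SSD, which the notes place in the 10,000-20,000 IOPS range.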