Unit 4 - Moving an application to the cloud
Moving an application to the cloud means transferring your app, data, and related services from on-premises (local) servers to cloud platforms such as Amazon Web Services, Microsoft Azure, or Google Cloud Platform. It is the process of deploying and running an application on cloud infrastructure instead of on-premises servers for better scalability, flexibility, and cost efficiency.
Cloud Migration Strategies (The “6 R’s”)
1. Rehosting (Lift & Shift) – Move the app as-is to the cloud. Fast, minimal changes.
2. Replatforming – Make small optimizations (e.g., a managed database) without a full redesign.
3. Refactoring / Re-architecting – Redesign the app to fully leverage cloud features.
4. Repurchasing – Replace the app with a cloud-based SaaS alternative.
5. Retire – Decommission apps that are obsolete.
6. Retain – Keep some apps on-premises if moving is not feasible.
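The choice among the six R's can be sketched as a simple decision rule. The function below is a hypothetical illustration only; the attribute names are assumptions, not a standard API:

```python
def choose_strategy(app):
    """Pick one of the 6 R's from a few coarse app attributes (illustrative only)."""
    if app.get("obsolete"):
        return "Retire"
    if app.get("must_stay_on_prem"):          # e.g., regulatory or latency constraints
        return "Retain"
    if app.get("saas_alternative_exists"):
        return "Repurchase"
    if app.get("needs_cloud_native_redesign"):
        return "Refactor"
    if app.get("minor_optimizations_only"):   # e.g., swap to a managed database
        return "Replatform"
    return "Rehost"                           # default: lift & shift as-is

print(choose_strategy({"saas_alternative_exists": True}))  # Repurchase
print(choose_strategy({}))                                 # Rehost
```

In practice this assessment involves many more factors (cost, team skills, licensing), but the ordering above mirrors how the strategies are usually prioritized.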
Cloud Migration Process
1. Assessment – Analyse apps, workloads, and dependencies.
2. Planning – Choose a cloud provider and decide on a migration strategy.
3. Preparation – Back up data, set up security, and configure the network.
4. Migration – Move apps, databases, and services to the cloud.
5. Testing – Verify performance, security, and functionality.
6. Optimization & Monitoring – Tune resources, monitor performance, and reduce costs.
Functionality mapping, in the context of cloud applications or migration, refers to matching the features and capabilities of an existing system to the new cloud environment.
1. Identify core features – Determine what the application currently does.
2. Map to cloud equivalents – Find cloud services that provide the same or better functionality.
3. Ensure compatibility – Make sure workflows, integrations, and data processes still work.
4. Optimize performance – Leverage cloud features (scaling, storage, security) without losing original functionality.
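As a concrete sketch, a functionality map can be recorded as a simple table from on-premises components to cloud equivalents. The component and service names below are illustrative examples, not prescriptions:

```python
# Hypothetical mapping of an app's on-premises components to cloud equivalents.
functionality_map = {
    "MySQL on local server":  "managed database (e.g., Amazon RDS)",
    "file share (NFS)":       "object storage (e.g., Amazon S3)",
    "cron jobs":              "scheduled serverless functions",
    "hardware load balancer": "managed load balancer",
}

def unmapped(features, mapping):
    """Return the features that still lack a cloud equivalent."""
    return [f for f in features if f not in mapping]

print(unmapped(["cron jobs", "LDAP auth"], functionality_map))  # ['LDAP auth']
```

Anything left unmapped flags a compatibility gap to resolve before migration.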
Application Attributes:
1. Functionality – What the app does.
2. Performance – Speed and efficiency.
3. Scalability – Ability to handle more users/resources.
4. Availability – Uptime and reliability.
5. Security – Data protection and compliance.
6. Maintainability – Ease of updates and fixes.
7. Portability – Ability to move between environments.
8. Resource Requirements – CPU, memory, and storage needs.
Cloud Service Attributes are the key characteristics that define how a cloud service behaves and what benefits it provides.
· On-demand – Resources available automatically.
· Network Access – Accessible from anywhere.
· Resource Pooling – Shared resources for multiple users.
· Elasticity – Scale up/down quickly.
· Measured Service – Pay for what you use.
· High Availability – Reliable uptime.
· Security – Data protection and compliance.
· Manageability – Easy monitoring and control.
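The "measured service" attribute can be made concrete with a pay-per-use cost calculation. The rates below are made-up numbers purely for illustration:

```python
def monthly_cost(compute_hours, gb_stored, rate_per_hour=0.05, rate_per_gb=0.02):
    """Pay only for what you use: compute hours plus storage (illustrative rates)."""
    return compute_hours * rate_per_hour + gb_stored * rate_per_gb

# 200 compute hours and 50 GB stored this month:
print(round(monthly_cost(200, 50), 2))  # 11.0
```

Contrast this with on-premises capacity, where the hardware cost is the same whether utilization is 5% or 95%.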
Cloud Bursting is a hybrid cloud technique in which an application normally runs on a private cloud or on-premises data centre, but when demand exceeds the capacity of the private setup, it automatically offloads the excess traffic to public cloud resources. This lets the private environment handle spikes without over-provisioning.
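A cloud-bursting router can be sketched as: serve from private capacity first, and overflow to the public cloud only when demand exceeds it. This toy model (capacities in requests per second are made up) ignores real concerns like latency and data locality:

```python
def route_requests(demand, private_capacity):
    """Split incoming load between the private cloud and a public-cloud burst."""
    private = min(demand, private_capacity)
    burst = demand - private          # overflow handled by the public cloud
    return {"private": private, "public_burst": burst}

print(route_requests(80, 100))   # {'private': 80, 'public_burst': 0}
print(route_requests(150, 100))  # {'private': 100, 'public_burst': 50}
```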
Unit 5 - Advanced cloud concepts
• Advanced cloud concepts focus on optimizing performance, scalability, and efficiency through technologies like serverless computing, edge processing, and container orchestration (e.g. Kubernetes).
• Key areas include managing multi-cloud environments, advanced security (encryption, IAM), AI/ML integration, and automating data pipelines to minimize infrastructure management.
• Serverless Computing (FaaS): Developers run code without managing servers, paying only for execution time (e.g., AWS Lambda, Azure Functions).
• Serverless computing is a cloud-native model where developers build and run applications without managing infrastructure, as the cloud provider handles all provisioning, scaling, and maintenance.
• It features an auto-scaling, event-driven architecture where code executes only on demand, allowing companies to pay only for the exact resources consumed.
• No Infrastructure Management: Developers focus solely on writing code rather than managing virtual machines or servers.
• Pay-as-You-Go Billing: Charges are based on actual execution time (e.g., CPU seconds, number of requests) rather than pre-purchased capacity.
• Automatic Scaling: Applications automatically scale up or down based on traffic, from zero to thousands of requests.
• Functions-as-a-Service (FaaS): The core component, allowing developers to deploy small, single-purpose functions.
• Serverless Architecture (FaaS): Function as a Service is a cloud computing model where you deploy individual functions that execute in response to events, without managing servers. Key characteristics:
- No server management
- Event-driven execution model
- Automatic scaling
Examples: AWS Lambda, Azure Functions, Google Cloud Functions.
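An AWS Lambda-style function in Python follows the handler signature below. The event shape is a hypothetical example; locally the handler is just a plain function you can call, while in the cloud the platform invokes it per event and scales it automatically:

```python
import json

def lambda_handler(event, context):
    """Minimal FaaS sketch: runs once per event, no server to manage (assumed event shape)."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for testing (the cloud platform normally calls this for you):
print(lambda_handler({"name": "cloud"}, None))
```

Billing for such a function is per invocation and per execution time, which is what "pay only for execution time" means in practice.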
• Containers and orchestration: using containers for consistent application deployment and Kubernetes for automating scaling, deployment, and management.
• Edge computing: processing data closer to the user to reduce latency, which is critical for real-time applications and IoT.
• Multi-cloud and hybrid cloud: utilizing multiple cloud providers or combining private/public clouds for flexibility, disaster recovery, and avoiding vendor lock-in.
What is Containerization?
Containerization is the process of packaging an application and all its dependencies (libraries, runtime, system tools, configuration) into a container so it can run consistently across different environments.
• Bundles an application's code, runtime, system tools, and libraries into a single executable unit that runs consistently on any infrastructure.
• Key Benefits: Portability, high efficiency (sharing the same OS kernel), fast deployment, and security through isolation.
• Use Case: Ideal for packaging microservices to ensure they run the same in development, testing, and production.
e.g., Docker, Kubernetes
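As a concrete sketch of how Docker packaging works, the snippet below builds the text of a minimal Dockerfile for a Python app. The base image, file names, and layer order are illustrative assumptions, not a prescribed layout:

```python
def make_dockerfile(base="python:3.12-slim", app="app.py"):
    """Emit a minimal Dockerfile: base image + dependencies + app code (illustrative)."""
    return "\n".join([
        f"FROM {base}",                          # runtime and system libraries
        "COPY requirements.txt .",
        "RUN pip install -r requirements.txt",   # bundle the app's dependencies
        f"COPY {app} .",
        f'CMD ["python", "{app}"]',              # container entry point
    ])

print(make_dockerfile())
```

Building this file with `docker build` produces an image that bundles code, runtime, and libraries into the single executable unit described above.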
Advanced Kubernetes concepts include:
• Custom Resource Definitions (CRDs)
• Operators pattern
• Horizontal Pod Autoscaling (HPA)
• Cluster Autoscaler
• Multi-cluster federation
• Network policies
• Pod security standards
What is Orchestration?
Orchestration is the automated management of containers, especially when you have many containers running across multiple servers. Key tasks include:
• Deployment
• Scaling
• Load balancing
• Self-healing (restarting failed containers)
• Rolling updates
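The self-healing and scaling tasks above boil down to a reconciliation loop: compare desired state to actual state and act on the difference. The sketch below is a toy model of that idea, not the real Kubernetes API:

```python
def reconcile(desired_replicas, running):
    """Return the actions an orchestrator would take (toy model).

    `running` is a list of container states, e.g. ["ok", "failed", "ok"].
    """
    actions = []
    healthy = [c for c in running if c == "ok"]
    actions += ["restart"] * (len(running) - len(healthy))   # self-healing
    diff = desired_replicas - len(running)
    if diff > 0:
        actions += ["start"] * diff                          # scale up
    elif diff < 0:
        actions += ["stop"] * (-diff)                        # scale down
    return actions

print(reconcile(3, ["ok", "failed"]))  # ['restart', 'start']
```

Kubernetes controllers run exactly this kind of loop continuously, which is why failed containers reappear and replica counts converge to the declared spec.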
Big Data Analytics refers to the process of examining very large and complex data sets, commonly called big data, to uncover patterns, correlations, trends, and insights that traditional data processing methods cannot handle efficiently. It combines advanced technologies, statistical models, and machine learning to turn raw data into actionable intelligence for decision-making.
Big data is defined by the Vs (sometimes expanded from 3 to 5 Vs):
1. Volume – Massive amounts of data from sources like social media, sensors, transactions, and logs.
2. Velocity – The speed at which data is generated and needs to be processed.
3. Variety – Structured, semi-structured, and unstructured data (e.g., text, video, images, IoT data).
4. Veracity – Accuracy and trustworthiness of data.
5. Value – The actionable insights that can be derived.
Benefits
· Improved decision-making through real-time insights.
· Cost reduction by optimizing operations.
· Enhanced customer experiences through personalization.
· Competitive advantage via market trend analysis.
Applications of Big Data Analytics
Big data analytics has wide-ranging applications across industries:
- Healthcare: Predicting disease outbreaks, personalized medicine.
- Finance: Fraud detection, risk assessment.
- Retail: Customer behaviour analysis, recommendation engines.
- Manufacturing: Predictive maintenance, supply chain optimization.
- Government: Smart city planning, crime analytics.
Integrating machine learning (ML) with big data analytics allows organizations to
move beyond descriptive analysis and diagnostic analysis to predictive and prescriptive insights. Essentially, ML models can analyse
massive datasets to identify patterns, make predictions, and even automate
decisions at scale.
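As a minimal illustration of moving from descriptive to predictive analysis, the sketch below fits a straight-line trend to a small series with ordinary least squares, using only the standard library. The data points are made up; real pipelines would use libraries such as scikit-learn on far larger datasets:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b on tiny in-memory data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx   # slope, intercept

# Monthly sales (made-up data); predict month 6 from months 1-5:
a, b = fit_line([1, 2, 3, 4, 5], [10, 12, 14, 16, 18])
print(a * 6 + b)  # 20.0
```

Describing the past five months is descriptive analytics; extrapolating to month 6 is the predictive step the paragraph above refers to.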
Integrating machine learning (ML) with cloud computing
allows organizations to scale ML workflows, handle massive datasets, and deploy
models quickly without heavy on-premises infrastructure. Essentially, cloud
computing provides the computational power,
storage, and services needed to process big data and train complex ML
models efficiently.
Integrating ML with Cloud Computing
· Scalability: Cloud platforms can scale up resources (CPU, GPU, TPU) for large ML tasks.
· Cost Efficiency: Pay-as-you-go models reduce the need for expensive local infrastructure.
· Accessibility: Teams can collaborate remotely with shared cloud environments.
· Rapid Deployment: Cloud services allow fast deployment of ML models as APIs or web apps.
· Integration: Cloud platforms often provide built-in ML tools, data storage, and analytics services.
Cloud ML Platforms
· AWS SageMaker – End-to-end ML platform with training, deployment, and monitoring.
· Google Cloud AI Platform – Managed ML services, including AutoML and TensorFlow integration.
· Microsoft Azure ML – Drag-and-drop ML studio with scalable training.
· IBM Watson Studio – Data science and ML development with AI governance features.
Assignment Questions
a) Explain cloud migration strategies.
b) Explain the cloud migration process.
c) Explain the 6 R's in detail.
d) What is functionality mapping?
e) What are cloud service attributes?
f) Explain cloud bursting in detail.
g) Explain FaaS in detail.
h) Explain containerization with an example.
i) What is orchestration?
j) What is big data analytics, and what are the 5 V's in big data?
k) Explain machine learning integration.