Asynchronous workflows powered by AWS-Lambda

aditya goel
Jan 24, 2022 · 10 min read

In case you are landing here directly, it’s recommended to read through this documentation first.

In this particular blog, we shall demonstrate the following aspects :-

  • Full-fledged asynchronous Lambda triggered by an event at S3.
  • Lambda pushing an event to SNS Topic.
  • Another Lambda triggered by event at SNS Topic.
  • Pushing the event to a DLQ in exception situations.
  • ErrorHandler Lambda triggered by event on DLQ.
  • Creation of resources using CloudFormation “template.yaml”
  • Introduction to Lambda Scaling.
  • Destroying Lambda Stack.
  • Generating Docker Image from Docker container.
  • Pushing Docker Image to DockerHub.

Step #1) :- First, let’s create a Lambda Function from the available template :-

sam init

Step #2) :- Now, copy the entire directory from our Docker container to our local host. Note that we shall be doing the development on our local machine :-

docker cp 028cce033ce7:/PROJECTS-LAMBDAS/patientcheckout .

Step #3) :- Now, rename the function’s directory to “patientcheckout”.

Step #4) :- Let’s start writing our “template.yaml” file, with the help of which the “sam deploy” step creates the infrastructure resources :-

  • We have created an environment variable named “PATIENT_CHECKOUT_TOPIC”, and we assign its value using the intrinsic function !Ref.
  • The purpose of using this intrinsic function is that this resource (the SNS topic) gets created dynamically at runtime through this very CFN template, and we need access to it in our codebase. So, we use !Ref to assign that topic’s reference to PATIENT_CHECKOUT_TOPIC (see the fragment below).
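
For reference, the relevant fragment of the template (reproduced in full in Step #10) looks like this :-

Globals:
  Function:
    Environment:
      Variables:
        PATIENT_CHECKOUT_TOPIC: !Ref PatientCheckoutTopic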

Step #5) :- Now, create two resources using the CloudFormation template :-

  • S3 Bucket → Here, we have used the !Sub intrinsic function in order to construct the bucket name dynamically from the stack name, account ID and region.
  • SNS Topic → Here, we have also created an SNS topic (see the fragment below).
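
The corresponding fragment of the template (again, see Step #10 for the full file) :-

Resources:
  PatientCheckoutBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${AWS::StackName}-${AWS::AccountId}-${AWS::Region}

  PatientCheckoutTopic:
    Type: AWS::SNS::Topic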

Step #6) :- Now, let’s write our Lambda Function (“PatientCheckoutFunction”) → Whenever an object (i.e. any particular file) is uploaded to the S3 bucket, this particular Lambda shall be triggered.

  • Events Section → We use the afore-created S3 bucket in the Events section of our Lambda, because the S3 bucket is what triggers the Lambda function. The name of this event is “S3Event”.
  • Type → The type of this particular event is S3. Under its properties, we find a Bucket and the Events that need to be handled. Here, we use the !Ref intrinsic function along with the bucket which we created above.
  • S3ReadPolicy → Our Lambda function should be able to read the data from the S3 bucket, and that’s why we have defined the S3ReadPolicy on our bucket. So, our Lambda function will have the required policy permissions to read from this S3 bucket.
  • SNSPublishMessagePolicy → Our Lambda function (“PatientCheckoutFunction”) also needs permission to publish to the SNS topic (see the fragment below).
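
Here is the corresponding fragment of the template (shown in full in Step #10) :-

  PatientCheckoutFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: patientcheckout
      Handler: com.aditya.learn.functions.PatientCheckoutLambda::s3EventHandler
      Policies:
        - S3ReadPolicy:
            BucketName: !Sub ${AWS::StackName}-${AWS::AccountId}-${AWS::Region}
        - SNSPublishMessagePolicy:
            TopicName: !GetAtt PatientCheckoutTopic.TopicName
      Events:
        S3Event:
          Type: S3
          Properties:
            Bucket: !Ref PatientCheckoutBucket
            Events: s3:ObjectCreated:*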

Conclusion :-

  • When a file is uploaded to the S3 bucket, it will push an event which will trigger this Lambda function “PatientCheckoutFunction” implicitly. And we have defined a policy for the bucket, so that our Lambda function will be able to read from that S3 bucket.
  • Once we perform “sam deploy”, CFN shall create the required infrastructure for us, magically.

Step #7) :- Now, let’s write the code for the Lambda function that we just configured above :-

Observe that, upon the event being received by this particular Lambda function, it shall perform the following business logic (a sketch of the handler is shown after this list) :-

  • It reads the JSON data from the S3 bucket and converts it into a JSON array.
  • Then, we loop through this JSON array and fire off ONE message to the SNS topic for each event present in the array.
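
The handler’s source code isn’t reproduced in this write-up, so here is a minimal sketch of what it could look like, assuming the aws-lambda-java-events S3Event type, the AWS SDK v1 S3/SNS clients and Jackson; the variable names and error handling are illustrative only :-

package com.aditya.learn.functions;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class PatientCheckoutLambda {

    private final AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
    private final AmazonSNS snsClient = AmazonSNSClientBuilder.defaultClient();
    private final ObjectMapper objectMapper = new ObjectMapper();

    public void s3EventHandler(S3Event s3Event, Context context) {
        s3Event.getRecords().forEach(record -> {
            String bucketName = record.getS3().getBucket().getName();
            String objectKey = record.getS3().getObject().getKey();
            try {
                // Read the uploaded JSON file from S3 and parse it into a JSON array.
                JsonNode checkoutEvents = objectMapper.readTree(
                        s3Client.getObject(bucketName, objectKey).getObjectContent());
                // Publish ONE SNS message per element of the array; the topic ARN comes
                // from the PATIENT_CHECKOUT_TOPIC environment variable set via !Ref.
                checkoutEvents.forEach(checkoutEvent -> snsClient.publish(
                        System.getenv("PATIENT_CHECKOUT_TOPIC"), checkoutEvent.toString()));
            } catch (Exception e) {
                // Re-throwing makes the asynchronous invocation fail, which is what later
                // lets Lambda forward the event to the DLQ configured in the error-handling steps.
                throw new RuntimeException(e);
            }
        });
    }
}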

Step #8) :- Now, let’s write our Lambda Function (“PatientSNSListenerFunction”) → Whenever an event arrives in the SNS Topic, this particular Lambda shall be triggered.

  • Events Section → We use the afore-created SNS topic in the event section of our Lambda, because an event on the SNS topic is what shall trigger the Lambda function (PatientSNSListenerFunction). The name of this event is “SNSEvent”.
  • Type → The type of this particular event is SNS. Under its properties, we use the !Ref intrinsic function along with the topic which shall trigger this particular Lambda.

Conclusion :- When the event arrives at the SNS topic, our Lambda function shall be listening for that particular event.

Step #9) :- Now, let’s write the code for the Lambda function that we just configured above :-

Observe that, upon the event being received by this particular Lambda function, it shall first convert the received JSON into an object. In this case, we are merely printing the received event. That’s all.
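
Again, a minimal sketch of such a listener, assuming the aws-lambda-java-events SNSEvent type (the conversion into a domain object is omitted and we simply log the message) :-

package com.aditya.learn.functions;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.events.SNSEvent;

public class SNSListenerLambda {

    public void snsEventHandler(SNSEvent snsEvent, Context context) {
        // Deserialization of the JSON message into a domain object is omitted here;
        // for now, we merely log the message delivered by the SNS topic.
        snsEvent.getRecords().forEach(record ->
                context.getLogger().log("Received event: " + record.getSNS().getMessage()));
    }
}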

Step #10) :- Here is how our final “template.yaml” file looks :-

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  patientcheckout

  Sample SAM Template for patientcheckout

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 30
    Runtime: java8
    MemorySize: 512
    Environment:
      Variables:
        PATIENT_CHECKOUT_TOPIC: !Ref PatientCheckoutTopic

Resources:
  PatientCheckoutBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${AWS::StackName}-${AWS::AccountId}-${AWS::Region}

  PatientCheckoutTopic:
    Type: AWS::SNS::Topic

  # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
  PatientCheckoutFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: patientcheckout
      Handler: com.aditya.learn.functions.PatientCheckoutLambda::s3EventHandler
      Policies:
        - S3ReadPolicy:
            BucketName: !Sub ${AWS::StackName}-${AWS::AccountId}-${AWS::Region}
        - SNSPublishMessagePolicy:
            TopicName: !GetAtt PatientCheckoutTopic.TopicName
      Events:
        S3Event:
          Type: S3
          Properties:
            Bucket: !Ref PatientCheckoutBucket
            Events: s3:ObjectCreated:*

  PatientSNSListenerFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: patientcheckout
      Handler: com.aditya.learn.functions.SNSListenerLambda::snsEventHandler
      Events:
        SNSEvent:
          Type: SNS
          Properties:
            Topic: !Ref PatientCheckoutTopic

Step #11) :- Next, as we are done with all development/coding work, let’s copy all our artefacts back to our docker-container, where we shall be building & deploying our Lambda-Functions.

docker cp template.yaml 028cce033ce7:/PROJECTS-LAMBDAS/patientcheckout/
docker cp patientcheckout 028cce033ce7:/PROJECTS-LAMBDAS/patientcheckout/

Step #12) :- Let’s observe whether we now have all the files within our container :-

Step #13) :- Now, let’s go ahead and build our codebase :-

sam build

Step #14) :- Now, let’s deploy our codebase to the AWS stack :-

sam deploy --guided

Step #15) Now, let’s test our entire workflow with the detailed steps below. Let’s first prepare a JSON-based file which contains TWO events.
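
The exact schema of the checkout events isn’t reproduced here; a purely illustrative file with TWO events (hypothetical field names) could look like this :-

[
  { "patientId": "P-001", "checkoutTime": "2022-01-24T10:15:00Z", "amount": 250.0 },
  { "patientId": "P-002", "checkoutTime": "2022-01-24T11:30:00Z", "amount": 410.0 }
]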

Step #16) Next, we shall upload the file to our S3 bucket. Once we upload this file, it shall trigger an event for our Lambda function “PatientCheckoutFunction”.
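
Assuming the AWS CLI is configured, the upload can be done either from the S3 console or with a command along these lines (the file name is illustrative; replace the bucket name with the one created by our stack) :-

aws s3 cp patients.json s3://<stack-name>-<account-id>-<region>/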

Step #17) Let’s verify whether this event (i.e. the file upload @ the S3 bucket) did trigger the Lambda function or not → Yes, in fact the Lambda function (PatientCheckoutFunction) did get triggered.

Step #18) By now, the TWO different events would have reached the SNS topic. Again, notice that an event on the SNS topic is bound to trigger the other Lambda, “PatientSNSListenerFunction”. Let’s see if the same got triggered or not.

In fact, yes, our Lambda (PatientSNSListenerFunction) got triggered twice, once for each event received at the SNS topic.

Just to verify: we do have an SNS topic, which was created by our SAM template, and a Lambda listening to it.

Let’s now build error handling with our aforesaid Lambda (PatientCheckoutFunction) :-

Step #1.) Let’s first create an SNS topic through our CFN template :-
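
A sketch of this addition to “template.yaml”; the logical name “PatientCheckoutDLQTopic” is an assumption, since the original name isn’t reproduced here :-

  PatientCheckoutDLQTopic:
    Type: AWS::SNS::Topic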

Step #2.) Next, we shall modify the definition of our Lambda function in our “template.yaml” configuration file :-

Note that, in case any exception occurs during the execution of the “PatientCheckoutFunction” Lambda, the event shall be sent to the dead-letter queue (DLQ), i.e. an SNS topic. Further, from that SNS topic, we shall read the data and perform the next set of actions.
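
A hedged sketch of the modified function definition, using SAM’s DeadLetterQueue property together with the assumed topic name from Step #1.) :-

  PatientCheckoutFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: patientcheckout
      Handler: com.aditya.learn.functions.PatientCheckoutLambda::s3EventHandler
      DeadLetterQueue:
        Type: SNS
        TargetArn: !Ref PatientCheckoutDLQTopic   # assumed logical name from Step #1.)
      # ... Policies and Events remain as shown in Step #10) above ...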

Step #3.) Let’s now see one of the ways to handle the event raised on our DLQ, i.e. the SNS topic :-

Note the “Events” section: in case any event arrives on the SNS topic, it shall trigger the afore-specified Lambda function (ErrorHandlerListenerFunction).
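
A sketch of that function’s definition; the handler class/method names here are assumptions, since the original isn’t reproduced :-

  ErrorHandlerListenerFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: patientcheckout
      Handler: com.aditya.learn.functions.ErrorHandlerLambda::snsErrorEventHandler   # assumed names
      Events:
        SNSErrorEvent:
          Type: SNS
          Properties:
            Topic: !Ref PatientCheckoutDLQTopic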

Step #4.) Let’s now have a look @ the code of the ErrorHandlerListenerFunction Lambda :-

As of now, we are simply printing the Event received on the DLQ SNS Topic.
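
A minimal sketch of such a handler; the class and method names are assumptions matching the Step #3.) sketch above :-

package com.aditya.learn.functions;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.events.SNSEvent;

// NOTE: the class and method names are assumptions, chosen to match the
// hypothetical Handler shown in the Step #3.) sketch above.
public class ErrorHandlerLambda {

    public void snsErrorEventHandler(SNSEvent snsEvent, Context context) {
        // For now, we simply print the failed event that landed on the DLQ SNS topic.
        snsEvent.getRecords().forEach(record ->
                context.getLogger().log("DLQ event received: " + record.getSNS().getMessage()));
    }
}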

Step #5.) Now, we shall demonstrate the afore-created Lambdas in action, by uploading a data file in the wrong format to our S3 bucket. Note that this file has a deliberate issue.

Step #6.) Now, once we have successfully uploaded the above file to our S3 bucket, the event shall be raised and handled by the PatientCheckoutFunction Lambda.

As expected, our Lambda runs into trouble, because the file format was faulty.

Step #7.) Next, let’s revisit the code of the PatientCheckoutFunction Lambda, where we re-throw the exception. Note that, as soon as the exception is thrown out of the Lambda, Lambda shall push the failed event to the DLQ; in our case, our DLQ is the SNS topic configured in Step #2.) above.

Step #8.) After we perform the build & deploy steps with SAM, let’s have a look at our original Lambda function “PatientCheckoutFunction” in the AWS console :-

Note that our Lambda has the DLQ configured automatically, thanks to our CFN template, where we did this configuration in Step #2.) above.

Step #9.) Let’s now investigate our DLQ SNS topic and observe that the “ErrorHandlerListenerFunction” has subscribed to this SNS topic.

Step #10.) Let’s now see the logs of our ErrorHandlerListenerFunction Lambda :-

Question :- What happens when multiple events trigger the Lambda concurrently?

Answer :- If multiple events trigger the same Lambda function simultaneously, the Lambda service will create multiple instances of that Lambda function, i.e. an entire container/execution environment per instance, and each event will be handled by a different instance of the Lambda function.

Let’s now destroy our AWS stack from console :-

aws cloudformation delete-stack --stack-name patientcheckout

Let’s now build the Docker Image with all this work so far :-

Step #1.) Let’s first investigate our container :-

docker container ps -a

Step #2.) Let’s now generate a new docker image from our container :-

docker container commit -a "ubuntu based image with aws and sam cli along with lambdas practice learned" 028cce033ce7 ubuplusawsplussampluslambdas

Step #3.) Let’s now see, what all docker images we have got so far :-

docker images

Step #4.) One last step is to tag the image as owner/image-name; only officially published images are allowed to have just a name. We don’t really rename the image, we just apply a tag to it. We’ll do that with the command below :-

docker image tag caf81a5e868e adityagoel123/ubuwithawslambdas

Step #5.) Let’s now again investigate, which all docker images we have so far :-

docker images

Step #6.) Let’s now push this docker image to DockerHub :-

docker image push adityagoel123/ubuwithawslambdas

That’s all in this section. If you liked reading this blog, kindly do press the clap button multiple times to indicate your appreciation. We shall see you in the next series.
