Rekognition Image


Amazon Rekognition makes it easy to add image and video analysis to your applications using proven, highly scalable, deep learning technology that requires no machine learning expertise to use. With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.

With Amazon Rekognition Custom Labels, you can identify the objects and scenes in images that are specific to your business needs. For example, you can build a model to classify specific machine parts on your assembly line or to detect unhealthy plants. Amazon Rekognition Custom Labels takes care of the heavy lifting of model development for you, so no machine learning experience is required. You simply need to supply images of objects or scenes you want to identify, and the service handles the rest.

In a previous post, we took a closer look at Amazon Rekognition in the AWS Console; if you missed it, I strongly recommend going back and reading it, since we follow much the same approach to tag images programmatically. In this post, we see how to create a serverless app that automatically tags newly uploaded images. We use the following AWS services.


  • DynamoDB
  • Lambda
  • S3

To create and deploy the app, we use the Serverless Application Model (SAM). We first create a Lambda function that extracts the labels from an image, then we add a trigger that feeds newly uploaded pictures to Rekognition to generate the tags. Lastly, we save the results in DynamoDB, keeping the name of the image, the label, and (optionally) the confidence score.

In the picture above, the architecture of the app is depicted; based on that schema, we create our serverless app using the Serverless Application Model. You can find the code of the project here.

  • Difficulty level: 200
  • Cost: Free (covered by the AWS Free Tier). Remember to delete the stack after you complete the tutorial.

Without further ado, let’s begin.

Creating the SAM template

First of all, we create a CloudFormation stack using SAM to deploy our app; that way, we have full control over the spawned resources. Let's take a quick look at the SAM template.

View the code on Gist.

I know that looks confusing, but once we go through everything, you will see it is straightforward. Let's go over the template line by line.

  • Line 6-12: Definitions for the Lambda Function
  • Line 14-59: Defined resources
    • Line 16-19: We create an S3 bucket to upload our images
    • Line 22-36: We create a DynamoDB table to save the results from Amazon Rekognition
      • Line 27-31: We define the schema for the table
      • Line 32-36: We define the type of attributes
    • Line 40-59: We define our Lambda function
      • Line 43-44: This is where the code of the project is and the lambda handler
      • Line 45-47: These are the env variables of the Lambda
      • Line 48-53: These are the attached policies
    • Line 54-59: This is the trigger event coming from the S3 bucket. BEWARE! I haven't filtered the image formats, so the Lambda will try to extract labels from anything uploaded to this bucket. If you want to limit which files go through label detection, you can add a filter that only accepts the image formats (for example, a suffix filter on the S3 event). Here is a tutorial
  • Line 61-70: We define the output of the SAM template to have easy access to resources ARNs
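For reference, here is a condensed sketch of what such a template could look like. This is an illustration, not the original gist: the line numbers in the walkthrough above refer to the original template, and names like DetectLabelsFunction, the table keys, and the bucket-name pattern are my own assumptions.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Serverless image auto-tagging with Amazon Rekognition

Globals:
  Function:
    Runtime: python3.8
    Timeout: 30
    MemorySize: 256

Resources:
  # Bucket for the uploaded images. The name is fixed up front so the
  # function's S3 read policy below doesn't create a circular reference.
  ImagesBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "${AWS::StackName}-images"

  # One item per (image, label) pair; confidence is stored as a plain attribute.
  LabelsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: image
          AttributeType: S
        - AttributeName: label
          AttributeType: S
      KeySchema:
        - AttributeName: image
          KeyType: HASH
        - AttributeName: label
          KeyType: RANGE
      BillingMode: PAY_PER_REQUEST

  DetectLabelsFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: app.lambda_handler
      Environment:
        Variables:
          TABLE_NAME: !Ref LabelsTable
      Policies:
        - RekognitionDetectOnlyPolicy: {}
        - DynamoDBCrudPolicy:
            TableName: !Ref LabelsTable
        - S3ReadPolicy:
            BucketName: !Sub "${AWS::StackName}-images"
      Events:
        ImageUpload:
          Type: S3
          Properties:
            Bucket: !Ref ImagesBucket
            Events: s3:ObjectCreated:*

Outputs:
  BucketName:
    Value: !Ref ImagesBucket
  TableArn:
    Value: !GetAtt LabelsTable.Arn
  FunctionArn:
    Value: !GetAtt DetectLabelsFunction.Arn
```

Note the SAM policy templates (RekognitionDetectOnlyPolicy, DynamoDBCrudPolicy, S3ReadPolicy), which scope the function's role to exactly what the handler needs.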

Creating the Lambda

Now that we have created the SAM template, we need to provide the code for the Lambda function. Remember lines 43-44, where we defined the location of the code and the Lambda handler. Create a folder named src and, inside it, create the handler file referenced in the template. Inside this file, paste the following code.

View the code on Gist.

This function gets the parameters from the trigger (lines 13-14) and calls Amazon Rekognition to detect the labels. I have hard-coded the parameters (lines 24-25) for the maximum number of labels and the confidence threshold, but you can parameterize those values any way you want. Once I have the labels, I insert them into our newly created DynamoDB table. Notice that I pass the name of the table as an environment variable.
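If you can't reach the gist, a minimal handler along those lines might look like this. Line numbers won't match the original, and the MaxLabels/MinConfidence values and the item attribute names are assumptions:

```python
import json
import os
import urllib.parse
from decimal import Decimal


def parse_s3_event(event):
    """Extract (bucket, key) from an S3 object-created event record."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    # Object keys arrive URL-encoded (spaces become '+').
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    return bucket, key


def lambda_handler(event, context):
    # boto3 ships with the Lambda Python runtime; imported here so the
    # parsing helper above can be exercised without AWS dependencies.
    import boto3

    bucket, key = parse_s3_event(event)

    rekognition = boto3.client("rekognition")
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=10,        # hard-coded, as in the post; parameterize as needed
        MinConfidence=80.0,
    )

    table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])
    # One item per label: image name, label, and its confidence score.
    # DynamoDB rejects Python floats, hence the Decimal conversion.
    with table.batch_writer() as batch:
        for label in response["Labels"]:
            batch.put_item(Item={
                "image": key,
                "label": label["Name"],
                "confidence": Decimal(str(label["Confidence"])),
            })

    return {"statusCode": 200,
            "body": json.dumps({"labels": len(response["Labels"])})}
```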

Well, that’s pretty much it! We have created our app. Now we have to deploy it and test it.


Deploying the stack

Finally, to deploy the stack, you need to run a few commands. I usually put them in a script file. You will only need a bucket to store the packaged template, and read-write access to that bucket (line 2, the --s3-bucket argument).

View the code on Gist.
View the code on Gist.
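The two gists boil down to a package step and a deploy step. The bucket and stack names below are placeholders; substitute your own, and drop --profile if you use your default profile:

```shell
# Package: uploads the code to S3 and rewrites template.yaml
# with the resulting S3 URIs.
sam package \
    --template-file template.yaml \
    --s3-bucket my-deployment-bucket \
    --output-template-file packaged.yaml \
    --profile default

# Deploy: creates (or updates) the CloudFormation stack.
# CAPABILITY_IAM is required because the template creates IAM roles.
sam deploy \
    --template-file packaged.yaml \
    --stack-name rekognition-auto-tagging \
    --capabilities CAPABILITY_IAM \
    --profile default
```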

The stack is now deployed. At the end of the output, you will see the ARNs and the bucket name of our app; we use the bucket name to upload files in the next section, so grab it for now.

Uploading files

As a next step, all we have to do is to upload images into the bucket. We will use the script from the previous tutorial to upload files to our bucket.

View the code on Gist.
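If you don't have the script from the previous tutorial handy, a small boto3 upload script in the same spirit could look like this (the file layout and function names are my own, not the original script's):

```python
import mimetypes
import pathlib


def guess_content_type(path):
    """Best-effort Content-Type for the uploaded object (defaults to binary)."""
    content_type, _ = mimetypes.guess_type(str(path))
    return content_type or "application/octet-stream"


def upload_image(path, bucket):
    # boto3 is imported lazily so the helper above stays importable
    # without AWS dependencies installed.
    import boto3

    path = pathlib.Path(path)
    s3 = boto3.client("s3")
    s3.upload_file(
        str(path),
        bucket,
        path.name,  # object key: just the file name
        ExtraArgs={"ContentType": guess_content_type(path)},
    )


if __name__ == "__main__":
    import sys
    # Usage: python upload.py <image-path> <bucket-name>
    upload_image(sys.argv[1], sys.argv[2])
```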

Once we upload an image using the above script, we can see that the DynamoDB table has multiple entries for the image, one for each label, accompanied by the confidence of the result. Now we can run queries on DynamoDB and filter images by their labels.

Deleting everything

To delete everything, you can either go to the AWS Console, select the CloudFormation service, and delete the stack manually, or simply run the following command.

Again, you are not obliged to use the --profile argument if you are using your default profile. We defined the stack name when we deployed our stack.
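The command itself boils down to a CloudFormation delete. The names below are placeholders for whatever you used at deploy time; note that you must empty the images bucket first, because CloudFormation cannot delete a non-empty bucket:

```shell
# Empty the images bucket so the stack deletion doesn't fail.
aws s3 rm s3://<your-images-bucket> --recursive --profile default

# Delete the stack and every resource it created.
aws cloudformation delete-stack \
    --stack-name rekognition-auto-tagging \
    --profile default
```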


Well, that is it for today. As you can see, it is effortless to create a serverless auto-tagging app for your system. Let me know if you have any questions and/or suggestions in the comment section below or at my Twitter @siaterliskonsta. See you at the next one!

