08-Jan-2023
Lightrun Team
This is a glossary of all the common issues in AWS - AWS SDK JS

Troubleshooting Common Issues in AWS – AWS SDK JS


Project Description

 

AWS-SDK-JS is the official AWS SDK (Software Development Kit) for JavaScript. It provides a collection of tools and libraries that enable developers to build applications that interact with AWS services, such as Amazon S3, Amazon DynamoDB, and AWS Lambda.

Once installed, you can use the AWS-SDK-JS library in your JavaScript code to access AWS services. For example, you can use it to upload a file to Amazon S3 or read data from a DynamoDB table.

The AWS-SDK-JS library is actively maintained by AWS and is regularly updated with new features and bug fixes.

For more information on the AWS-SDK-JS library, you can refer to the AWS documentation and the GitHub repository.

 

Troubleshooting AWS – AWS SDK JS with the Lightrun Developer Observability Platform

 

Getting a sense of what’s actually happening inside a live application is a frustrating experience, one that relies mostly on querying and observing whatever logs were written during development.
Lightrun is a Developer Observability Platform, allowing developers to add telemetry to live applications in real-time, on-demand, and right from the IDE.
  • Instantly add logs to, set metrics in, and take snapshots of live applications
  • Insights delivered straight to your IDE or CLI
  • Works where you do: dev, QA, staging, CI/CD, and production

The most common issues for AWS – AWS SDK JS are:

 

[util-dynamodb]: `marshall` types allow passing `undefined` at compile time, but always throws at runtime

 

The marshall function in the @aws-sdk/util-dynamodb package converts JavaScript objects into the attribute-value format used by DynamoDB. However, DynamoDB does not support undefined values, so by default marshall throws an error at runtime if it encounters an undefined value in the object being marshalled, even though its TypeScript types accept one at compile time.

To fix this error, you will need to make sure that the object being marshalled does not contain any undefined values. One way to do this is to use the omitBy function from the lodash library to remove any keys with undefined values from the object:

import { marshall } from '@aws-sdk/util-dynamodb';
import _ from 'lodash';

const data = {
  key1: 'value1',
  key2: undefined,
  key3: 'value3',
};

const cleanData = _.omitBy(data, _.isUndefined);

const marshalledData = marshall(cleanData);

In this example, the cleanData object will not contain the key2 property, since it had an undefined value. The marshall function will then be able to convert the object to the DynamoDB format without throwing an error.
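If you would rather not depend on lodash, a plain-JavaScript helper can do the same filtering (a minimal sketch):

```javascript
// A dependency-free alternative to lodash's omitBy: drop keys whose
// value is undefined before marshalling.
function omitUndefined(obj) {
  return Object.fromEntries(
    Object.entries(obj).filter(([, value]) => value !== undefined)
  );
}

const data = {
  key1: 'value1',
  key2: undefined,
  key3: 'value3',
};

console.log(omitUndefined(data)); // { key1: 'value1', key3: 'value3' }
```

Newer versions of @aws-sdk/util-dynamodb also accept a removeUndefinedValues option on marshall that skips undefined values for you; check whether your SDK version supports it.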

 

S3.GetObject no longer returns the result as a string

 

The GetObject method of the AWS-SDK-JS v2 S3 client returns the object’s data as a Buffer on data.Body. A Buffer is a piece of memory that stores binary data, and it is not automatically converted to a string. (In SDK v3, Body is a stream instead, which is what this issue’s title refers to.)

To get the object’s contents as a string, convert the Buffer with its toString method, passing the encoding the contents are in (e.g. utf8 for a text file) as an argument.

Here is an example of how you might use the GetObject method to download an object from S3 and convert its contents to a string:

const s3 = new AWS.S3();

const params = {
  Bucket: 'my-bucket',
  Key: 'my-object.txt',
};

s3.getObject(params, (err, data) => {
  if (err) {
    console.error(err);
    return;
  }

  const contents = data.Body.toString('utf8');
  console.log(contents);
});

 

s3.getObject(params).createReadStream() Timeouts

 

The createReadStream method creates a readable stream that allows you to read the contents of the object from S3 in chunks, rather than downloading the entire object at once. This can be useful if you are dealing with large objects and want to process the data as it is being downloaded.

One possible reason for the createReadStream method timing out is that the network connection between your application and S3 is slow or unstable. If the data is not being transferred fast enough, the createReadStream method may time out before the download is complete.

To fix this issue, you can increase the HTTP timeout for the S3 client. In SDK v2 this is configured through the client’s httpOptions, not through the params object passed to getObject.

For example:

const s3 = new AWS.S3({
  httpOptions: {
    connectTimeout: 5000, // time allowed to establish the connection (ms)
    timeout: 300000,      // socket inactivity timeout (ms); the default is 120000
  },
});

const params = {
  Bucket: 'my-bucket',
  Key: 'my-object.txt',
};

const stream = s3.getObject(params).createReadStream();

Alternatively, you can try downloading the object in smaller chunks by specifying the Range option in the params object. This will allow you to download the object in smaller pieces, which may be faster and more stable over a slow or unstable network connection.

For more information on the createReadStream method and the timeout and Range options, you can refer to the AWS documentation and the Node.js documentation.
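The chunked approach described above can be sketched as a helper that computes the Range header value for each part; the helper itself is illustrative, not an SDK API, and each value would be passed as the Range parameter of a separate getObject call.

```javascript
// Build the S3 `Range` header values needed to download an object
// in fixed-size chunks.
function rangeHeaders(objectSize, chunkSize) {
  const ranges = [];
  for (let start = 0; start < objectSize; start += chunkSize) {
    const end = Math.min(start + chunkSize, objectSize) - 1;
    ranges.push(`bytes=${start}-${end}`);
  }
  return ranges;
}

console.log(rangeHeaders(10, 4)); // [ 'bytes=0-3', 'bytes=4-7', 'bytes=8-9' ]
```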

 

Cannot read properties of undefined (reading ‘memoizedProperty’) when using aws-sdk (v2) in Nuxt3

 

The Cannot read properties of undefined error occurs when you try to access a property of an object that is undefined. This can happen if you are trying to access a property of an object that has not been initialized or has been deleted.

To fix this error, you will need to make sure that the object is defined and has the property you are trying to access before attempting to access it.

In a Nuxt 3 application, a likely cause is the aws-sdk v2 package itself rather than your own code. memoizedProperty is an internal aws-sdk v2 utility that creates lazily initialized properties, and the SDK assumes Node.js globals that Nuxt 3’s Vite-based build does not provide to browser code by default. When those globals are missing, the SDK’s internal objects are left undefined, and accessing them produces the Cannot read properties of undefined error.

Common workarounds are to supply the missing globals through the build configuration (for example, defining global as globalThis in the Vite options) or to migrate to the modular AWS SDK v3 clients, which are designed to work in bundled browser environments.
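A commonly reported workaround in Nuxt 3, shown below as a sketch to verify against your Nuxt and Vite versions, is to define the Node-style global identifier that aws-sdk v2 expects in the Vite options of your Nuxt config:

```javascript
// nuxt.config.ts: supply the Node-style `global` that aws-sdk v2
// expects at runtime (workaround reported by users; verify against
// your Nuxt/Vite versions).
export default defineNuxtConfig({
  vite: {
    define: {
      global: 'globalThis',
    },
  },
});
```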

 

s3.getObject Promise example

 

Here is an example of how you can use the AWS-SDK-JS S3 client’s getObject method with a Promise to download an object from Amazon S3:

const AWS = require('aws-sdk');

const s3 = new AWS.S3();

const params = {
  Bucket: 'my-bucket',
  Key: 'my-object.txt',
};

const getObject = (params) => {
  return new Promise((resolve, reject) => {
    s3.getObject(params, (err, data) => {
      if (err) {
        reject(err);
      } else {
        resolve(data);
      }
    });
  });
};

getObject(params)
  .then((data) => {
    console.log(data);
  })
  .catch((err) => {
    console.error(err);
  });

In this example, we are using the getObject method to download an object from the my-bucket bucket with the key my-object.txt. We are passing the params object to the method to specify the bucket and key of the object we want to download.

The getObject method returns a Promise that is either resolved with the object’s data if the download is successful, or rejected with an error if the download fails.

We are using the then and catch methods to handle the Promise’s resolve and reject states, respectively. In the then block, we are logging the object’s data to the console. In the catch block, we are logging the error to the console.

 

[AWS-Lambda] – NetworkingError: Client network socket disconnected before secure TLS connection was established

 

The NetworkingError: Client network socket disconnected before secure TLS connection was established error usually indicates that there was a problem establishing a secure connection to the server. This can be caused by a number of issues, such as a network outage, a server-side problem, or a problem with the client’s networking configuration.

Here are a few things you can try to fix this error:

  1. Check your network connection to make sure it is stable and that you have sufficient bandwidth.
  2. Make sure that the Lambda function you are trying to invoke is running and responding to requests. You can check the function’s logs to see if there are any issues that may be causing it to fail.
  3. Make sure that the AWS SDK is correctly configured to use the correct region for the Lambda function. The region should match the region where the function is deployed.
  4. Make sure that the AWS SDK has the correct permissions to invoke the Lambda function. You can check the function’s IAM policy to make sure it allows the necessary permissions.

 

getSignedUrl not always returning a valid signed-url

 

The getSignedUrl method generates a URL that allows temporary access to an object in S3. The URL is signed using the AWS access key and secret access key of an IAM user or role, and it is valid for a specified period of time.

There are a few possible reasons why the signed URL generated by the getSignedUrl method may not be valid:

  1. The signed URL may have expired. By default, signed URLs generated by the getSignedUrl method are valid for 15 minutes. If the URL was generated more than 15 minutes ago, it will have expired and will no longer be valid.
  2. The IAM user or role used to sign the URL may not have permission to access the object. Make sure that the IAM user or role has the necessary permissions to access the object in S3.
  3. The object’s key may have been changed or deleted. If the object’s key has changed or the object has been deleted, the signed URL will no longer be valid.
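To tell the first case (expiry) apart from the others quickly, you can inspect the URL itself. This diagnostic helper is a sketch that assumes SigV4-style query parameters (X-Amz-Date and X-Amz-Expires); URLs signed differently will not carry them.

```javascript
// Decide whether a SigV4 presigned URL has already expired by parsing
// its X-Amz-Date and X-Amz-Expires query parameters.
function isPresignedUrlExpired(url, now = new Date()) {
  const params = new URL(url).searchParams;
  const dateStr = params.get('X-Amz-Date'); // e.g. 20230108T120000Z
  const expires = Number(params.get('X-Amz-Expires')); // seconds
  if (!dateStr || !expires) return false; // not a SigV4 URL; cannot tell
  const signedAt = Date.parse(
    dateStr.replace(
      /^(\d{4})(\d{2})(\d{2})T(\d{2})(\d{2})(\d{2})Z$/,
      '$1-$2-$3T$4:$5:$6Z'
    )
  );
  return now.getTime() > signedAt + expires * 1000;
}
```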

 

Vite apps will not build or run while using Amplify libraries with AWS-SDK

 

Vite is a modern web development build tool designed to be fast and lightweight. During development it serves your code as native ES modules (pre-bundling dependencies with esbuild), and for production builds it bundles with Rollup.

One possible reason for the build or run errors is that the AWS-SDK-JS (v2) and Amplify libraries assume a Node.js environment: they reference Node built-ins and globals that Vite does not polyfill for browser code, and their CommonJS code is not always resolved cleanly by the ES-module pipeline.

Rather than modifying the libraries themselves, the usual fixes are to adjust the Vite configuration (aliases, defines, and polyfills for the Node pieces the SDK expects) or to use the modular AWS SDK v3 clients, which are bundler-friendly. Community plugins such as vite-plugin-aws-sdk have also been suggested for using the v2 SDK with Vite.
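A frequently shared Vite configuration for the v2 SDK and Amplify, shown here as an assumption to verify against your versions, aliases the SDK’s Node runtime config to its browser variant and supplies the global identifier the SDK expects:

```javascript
// vite.config.js: point the SDK's Node runtime config at its browser
// variant and supply the `global` that aws-sdk expects (a commonly
// reported workaround; verify against your versions).
import { defineConfig } from 'vite';

export default defineConfig({
  resolve: {
    alias: {
      './runtimeConfig': './runtimeConfig.browser',
    },
  },
  define: {
    global: 'globalThis',
  },
});
```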

 

Module not found: Can’t resolve ‘fs’ in ‘node_modules/@aws-sdk/lib-storage/dist/es’

 

The fs module is a built-in Node.js module that provides filesystem-related functionality, such as reading and writing files. Because it is a Node built-in, it does not exist in browsers, and a bundler preparing browser code cannot resolve it; that is what the Module not found error means here. The @aws-sdk/lib-storage package (or one of its dependencies) references fs from a code path that your bundler is trying to include in a browser build.

Adding const fs = require('fs') to your own code will not help in a browser build. Instead, configure the bundler to stub the module out (for example, with webpack 5’s resolve.fallback option), or make sure the package’s browser entry point is being used. In a pure Node.js environment, fs is always available and requires no installation.
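If the error occurs while bundling for the browser (where fs does not exist), webpack 5 can be told to stub the module out instead of resolving it; a minimal sketch of that configuration:

```javascript
// webpack.config.js: for browser builds, tell webpack 5 not to try to
// bundle Node's built-in 'fs' at all (adjust to your build setup).
module.exports = {
  resolve: {
    fallback: {
      fs: false,
    },
  },
};
```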

 

SQS returns “ExpiredToken: The security token included in the request is expired” for long running process

 

This can happen if you are using temporary security credentials (such as those obtained through AWS Identity and Access Management (IAM) roles) and their lifetime has elapsed. Temporary credentials are typically valid for one hour by default (the exact lifetime depends on how they were issued), so a long-running process will outlive them unless they are refreshed.

To fix this error, you will need to obtain new temporary security credentials and use them to authenticate your requests. There are a few different ways you can do this, depending on how your application is set up:

  1. If your application is running on an Amazon Elastic Compute Cloud (EC2) instance, you can use the EC2 Instance Metadata Service to obtain new temporary security credentials automatically.
  2. If you are using the AWS SDK for JavaScript in a Node.js environment, you can use the aws-sdk library’s CredentialProviderChain class to automatically obtain new temporary security credentials when needed.
  3. If you are using the AWS SDK for JavaScript in a web browser, you can use the AWS Amplify library to automatically refresh your temporary security credentials when needed.

Once you have obtained new temporary security credentials, you should use them to authenticate your requests to SQS. This should fix the “ExpiredToken” error and allow you to make requests to SQS again.
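Whichever provider you use, a long-running worker also needs to handle the case where credentials expire mid-flight. A generic retry wrapper can do this; in this sketch, refreshCredentials is a placeholder for your provider (for example, the v2 SDK’s credentials.refresh()).

```javascript
// Run an SDK operation; if it fails with an expired-token error,
// refresh credentials and retry once.
async function withFreshCredentials(operation, refreshCredentials) {
  try {
    return await operation();
  } catch (err) {
    if (err.code === 'ExpiredToken') {
      await refreshCredentials();
      return operation(); // retry once with new credentials
    }
    throw err;
  }
}
```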

 

Limit the size of file upload with presigned urls

 

A presigned PUT URL cannot reliably enforce a maximum upload size: getSignedUrl signs specific request parameters, and a Content-Length constraint is either rejected or pins the upload to one exact size rather than a range. The supported way to limit upload size is a presigned POST, created with createPresignedPost, whose policy can include a content-length-range condition.

Here is an example of how you can use createPresignedPost with the AWS SDK for JavaScript to limit the size of uploads:

const AWS = require('aws-sdk');

// Set the region where your S3 bucket is located
AWS.config.update({ region: 'us-east-1' });

// Create an S3 client
const s3 = new AWS.S3();

// Maximum file size (in bytes) allowed by the upload policy
const fileSizeLimit = 1024 * 1024 * 5; // 5 MB

// Expiration time for the presigned POST (in seconds)
const urlExpiry = 60 * 5; // 5 minutes

const params = {
  Bucket: 'my-bucket',
  Expires: urlExpiry,
  Fields: {
    key: 'my-object',
  },
  Conditions: [
    // Accept uploads only between 0 bytes and 5 MB
    ['content-length-range', 0, fileSizeLimit],
  ],
};

s3.createPresignedPost(params, (err, data) => {
  if (err) {
    console.error(err);
    return;
  }

  // data.url is the form action; data.fields must be submitted with the file
  console.log(data.url, data.fields);
});

This creates a presigned POST policy that allows a file to be uploaded to the specified bucket and key only if its size is at most 5 MB. If an attempt is made to upload a larger file, S3 rejects the request with a policy error.

 

More issues from AWS repos

 

Troubleshooting aws-aws-sdk-java-v2 | Troubleshooting aws-aws-sam-cli | Troubleshooting aws-aws-toolkit-vs-code | Troubleshooting aws-aws-cdk
