
Amazon Aurora - "P1011: Error opening a TLS connection" when using SSL (through SSH tunnel)

See original GitHub issue

Bug description

Hi, I have set up Amazon Aurora as listed below:

  • Amazon Aurora (PostgreSQL) cluster with 3 instances, one in each AZ
  • All instances belong to a private subnet
  • I have set up an EC2 bastion host to use as an SSH tunnel

For local development I connect through the SSH tunnel like this:

ssh -N -L 5432:aurora-cluster.cluster-somerandomcharacter.ap-southeast-1.rds.amazonaws.com:5432 ec2-user@12.34.56.78 -i privateKeyOfEC2.cer
  • 12.34.56.78 is the public IP of the bastion server
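If a client later fails to connect, it helps to first confirm that the local end of the tunnel is actually listening. Below is a minimal sketch of my own (the file name tunnel-check.ts is hypothetical; it only uses Node's built-in net module) that checks whether anything accepts connections on the forwarded port:

// tunnel-check.ts - verify something is listening on the forwarded local port
import * as net from "node:net";

const socket = net.connect({ host: "127.0.0.1", port: 5432 }, () => {
  console.log("Local end of the SSH tunnel is reachable");
  socket.end();
});

socket.on("error", (err) => {
  console.error("Tunnel does not appear to be up:", err.message);
});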

Setting up Amazon Aurora PostgreSQL

I use the Terraform AWS provider to provision the resources. Here are the .tf definitions used.

Security Groups
resource "aws_security_group" "rds" {
  name        = "${lower(var.db_config.name)}_rds_sg"
  description = "Allow local PostgreSQL (5342) traffic"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.vpc.cidr_block]
  }

  egress {
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.vpc.cidr_block]
  }

  tags = merge(local.mandatory_tags, { Name = "${lower(var.db_config.name)}_rds_sg" })
}
Database Subnet Group
resource "aws_db_subnet_group" "db_subnet_group" {
  name       = "${lower(var.db_config.name)}_db_subnet_group"
  subnet_ids = aws_subnet.private_subnet.*.id
  tags = merge(local.mandatory_tags, { Name = "${var.db_config.name} DB Subnet Group" })
}
Aurora Cluster and Instance
resource "aws_rds_cluster" "postgresql" {
  cluster_identifier      = "${lower(var.db_config.name)}-aurora-cluster"
  engine                  = "aurora-postgresql"
  availability_zones      = slice(data.aws_availability_zones.available.names, 0, 3)
  db_subnet_group_name    = aws_db_subnet_group.db_subnet_group.name
  database_name           = var.db_config.name
  master_username         = var.db_config.username
  master_password         = var.db_config.password
  backup_retention_period = 7
  preferred_backup_window = "20:00-22:00"
  vpc_security_group_ids  = [aws_security_group.rds.id]

  tags = local.mandatory_tags
}

resource "aws_rds_cluster_instance" "cluster_instances" {
  count                = 3
  identifier           = "${lower(var.db_config.name)}-aurora-${count.index}"
  instance_class       = "db.t3.medium"
  availability_zone    = data.aws_availability_zones.available.names[count.index]
  db_subnet_group_name = aws_db_subnet_group.db_subnet_group.name
  cluster_identifier   = aws_rds_cluster.postgresql.id
  engine               = aws_rds_cluster.postgresql.engine
  engine_version       = aws_rds_cluster.postgresql.engine_version
  publicly_accessible  = false
}

Test Connection with psql

To verify this is not a problem with the SSH tunnel, I have tested connecting to the cluster through the tunnel with psql:

psql postgresql://postgres@localhost:5432/DBNAME

And it does connect.

(screenshot: successful psql connection through the tunnel)
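As an additional cross-check (my own suggestion, not part of the original report), the same connection can be tested from Node with the node-postgres pg package, which makes it easier to see whether TLS over the tunnel works outside of Prisma. The file name and credentials below are placeholders; rejectUnauthorized: false is used because the RDS certificate is issued for the cluster endpoint, not for localhost:

// pg-check.ts - hypothetical helper; "pg" is assumed to be installed separately
import { Client } from "pg";

async function main() {
  const client = new Client({
    connectionString: "postgresql://postgres:password@localhost:5432/DBNAME",
    // The tunnel makes the server look like "localhost", so hostname verification
    // against the RDS certificate would fail; skip it for this test only.
    ssl: { rejectUnauthorized: false },
  });
  await client.connect();
  const res = await client.query("SELECT version()");
  console.log(res.rows[0]);
  await client.end();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});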

How to reproduce

Connect with Prisma

First I try to connect without SSL:

.env

DATABASE_URL="postgresql://postgres:password@localhost:5432/DBNAME?schema=public"

I run prisma migrate dev as usual

Already in sync, no schema change or pending migration was found.

✔ Generated Prisma Client (2.20.1) to ./node_modules/@prisma/client in 113ms

And then I try to use Prisma Client.

Error: 
Invalid `prisma.work.create()` invocation:

  Authentication failed against database server at `localhost`, the provided database credentials for `postgres` are not valid.

Please make sure to provide valid database credentials for the database server at `localhost`.

The credentials are the same ones used when running prisma migrate (which succeeded). This left me confused, but after some searching I noticed this might be an SSL-related issue.
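To separate authentication problems from query problems, a minimal standalone script can force Prisma to open a connection and run a raw query that does not depend on any model. This is a sketch of my own (the file name connectivity-check.ts is hypothetical); it only uses the documented $connect, $queryRaw and $disconnect methods of Prisma Client:

// connectivity-check.ts - isolate connection/authentication issues from model queries
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function main() {
  // Open a connection eagerly instead of lazily on the first query
  await prisma.$connect();
  // A trivial query that exercises authentication without touching any table
  const result = await prisma.$queryRaw`SELECT 1 AS ok`;
  console.log(result);
}

main()
  .catch((err) => {
    console.error(err);
    process.exit(1);
  })
  .finally(() => prisma.$disconnect());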


Connect with Prisma (again with SSL)

I have downloaded the certificate bundle from the RDS SSL User Guide and placed it in ./prisma

.env

DATABASE_URL="postgresql://postgres:password@localhost:5432/DBNAME?schema=public&sslmode=require&sslcert=rds-combined-ca-bundle.cer&sslaccept=accept_invalid_certs"

I tried running prisma migrate dev again

Error: P1011: Error opening a TLS connection: One or more parameters passed to a function were not valid.

I have also tried using Prisma Client, which throws the same error, as expected:

Error: 
Invalid `prisma.work.create()` invocation:

  Error opening a TLS connection: One or more parameters passed to a function were not valid.

I have tried setting sslaccept and sslmode as in #5132, but I still have the same problem.
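For reference, this is how the connection string could be assembled programmatically to keep the SSL parameters readable (a sketch of my own; the parameter names sslmode, sslcert and sslaccept follow Prisma's PostgreSQL connector documentation, and to my understanding the sslcert path is resolved relative to the ./prisma directory):

// build-database-url.ts - sketch only; credentials and DB name are placeholders
const url = new URL("postgresql://postgres:password@localhost:5432/DBNAME");
url.searchParams.set("schema", "public");
// "require" makes TLS mandatory; whether an unverifiable certificate is accepted
// is controlled separately by sslaccept below
url.searchParams.set("sslmode", "require");
url.searchParams.set("sslcert", "rds-combined-ca-bundle.cer");
url.searchParams.set("sslaccept", "accept_invalid_certs");
console.log(url.toString()); // paste the output into DATABASE_URL in .env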

Expected behavior

Prisma should be able to connect to Amazon Aurora both when running prisma migrate dev and when using Prisma Client.

Prisma information

Schema

generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id       Int    @id @default(autoincrement())
  username String
  name     String
}

On executing the query

import { NextApiRequest, NextApiResponse } from "next";
import nc from "next-connect";
import { PrismaClient } from '@prisma/client'
const prisma = new PrismaClient()

const handler = nc();

handler
  .get(async (req: NextApiRequest, res: NextApiResponse) => {
    const result = await prisma.users.findMany();
    res.status(200).json(result);
  })
export default handler;

Environment & setup

  • OS: macOS 11.2.3
  • Database: Amazon Aurora with PostgreSQL compatibility (Engine version 11.9)
  • Node.js version: 14.15.4
  • Prisma version:
❯ npx prisma -v
Environment variables loaded from .env
prisma               : 2.20.1
@prisma/client       : 2.20.1
Current platform     : darwin
Query Engine         : query-engine 60ba6551f29b17d7d6ce479e5733c70d9c00860e (at node_modules/@prisma/engines/query-engine-darwin)
Migration Engine     : migration-engine-cli 60ba6551f29b17d7d6ce479e5733c70d9c00860e (at node_modules/@prisma/engines/migration-engine-darwin)
Introspection Engine : introspection-core 60ba6551f29b17d7d6ce479e5733c70d9c00860e (at node_modules/@prisma/engines/introspection-engine-darwin)
Format Binary        : prisma-fmt 60ba6551f29b17d7d6ce479e5733c70d9c00860e (at node_modules/@prisma/engines/prisma-fmt-darwin)
Default Engines Hash : 60ba6551f29b17d7d6ce479e5733c70d9c00860e
Studio               : 0.365.0

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
phwt commented, Jun 3, 2021

After trying to reproduce this issue again, it turned out to be a network configuration issue, but I’m not sure which specific configuration change solved it.

If you’ve encountered a similar issue like mine, try checking your network configuration as listed below:

  • The bastion host and the database instances must belong to the same VPC
  • The security group of the database instances allows incoming connections from the VPC CIDR block on the DB port (5432 for PostgreSQL)
  • The security group of the database instances allows all outgoing connections (0.0.0.0/0) on all ports and all protocols
  • Check for other network configurations that might block the connection between the database instances and the bastion host, or between the bastion host and the internet (your local development machine)

This configuration might expose some unwanted ports, so modify it to best suit your needs; a rough sketch of the equivalent rules follows below. I hope this helps!
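Purely as an illustration of the checklist above (my own sketch, written with AWS CDK in TypeScript rather than the Terraform used in the original setup, so the names DbNetworkStack, Vpc and RdsSg are made up), the security group rules would look roughly like this:

// db-network.ts - illustrative only; the original provisioning uses Terraform
import { App, Stack } from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";

const app = new App();
const stack = new Stack(app, "DbNetworkStack");

// Bastion host and database instances must live in the same VPC
const vpc = new ec2.Vpc(stack, "Vpc", { maxAzs: 3 });

const dbSecurityGroup = new ec2.SecurityGroup(stack, "RdsSg", {
  vpc,
  // Allow all outgoing connections on all ports and protocols
  allowAllOutbound: true,
});

// Allow incoming PostgreSQL traffic from anywhere inside the VPC
dbSecurityGroup.addIngressRule(
  ec2.Peer.ipv4(vpc.vpcCidrBlock),
  ec2.Port.tcp(5432),
  "PostgreSQL from inside the VPC"
);

app.synth();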

1 reaction
phwt commented, Jun 1, 2021

@phwt Can you help us to get to a reproduction here somehow? Some steps or instructions how to set this up exactly would be really helpful. Optimally even a system that we can just play with?

I used the Terraform AWS provider to provision the database cluster and its instances. I have added the .tf files I used to the “Setting up Amazon Aurora PostgreSQL” section in my original post.

As of now, the project has ended and I no longer have access to the AWS account associated with the issue. If possible I’ll try to reproduce it in my personal AWS account.

Read more comments on GitHub >

