Starting with AWS Lambda is really easy. You can even write a function in the browser! But that’s not how you should work on a daily basis. You must have a CI/CD pipeline set up, you probably have two accounts (one for production and another one for development), you need a repeatable and reliable way to create your infrastructure, and so on. In this article I’ll show you how to create a simple continuous delivery pipeline that brings us closer to professional application development.

Disclaimer

The goal of this article is not to show you how to build or test a serverless application, so the sample application will be extremely simple and the tests might not make much sense. The goal is to learn how we can create a CD pipeline to deploy a serverless application.

Pre-requisites

To follow this article you’ll need:

  • Two accounts in AWS, one simulating the development and the other one simulating the production environment.
  • A CircleCI account set up.
  • NodeJS installed on your computer.
  • The Serverless Framework installed on your computer.
  • An AWS profile created on your computer (see the sketch after this list). In my case, the name of the profile is vgaltes-serverless.
  • jq installed on your computer.
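If you haven’t created that profile yet, a minimal sketch looks like this (assuming you already have an access key and secret for your development account; the region and output format shown are just examples):

# Create a named profile for the development account
aws configure --profile vgaltes-serverless
# AWS Access Key ID [None]: <your dev access key id>
# AWS Secret Access Key [None]: <your dev secret access key>
# Default region name [None]: us-east-1
# Default output format [None]: json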

Sample application

Let’s start by creating a VERY simple NodeJS serverless application. So, open a terminal, create a new folder wherever you want, cd into it and type:

serverless create --template aws-nodejs

This will create a basic serverless project with a function that just says hello. Now it’s time to add a test for that function. Create a file called handler.spec.js and copy the following code into it:

// mocha provides describe/it as globals when run through the mocha CLI
const chai = require('chai');
const should = chai.should(); // enables the should-style assertions used below

const handler = require('./handler');

describe("The handler function", () => {
    it("returns a message", () => {
        handler.hello(undefined, undefined, function(error, response){
            let body = JSON.parse(response.body);
            body.message.should.be.equal('Go Serverless v1.0! Your function executed successfully!');
        });
    });
});

Before being able to run this test, you need to install the packages:

npm install --save-dev mocha
npm install --save-dev chai

We’re now very close to being able to run the test. Just add the following line inside the scripts section of the package.json file:

"test": "./node_modules/.bin/mocha **/*.spec.js"

Run npm test and you should see a test passing:

A test

And that’s all! As I told you, we’re not going to be famous for writing a super-complex application.

A basic CI pipeline

Now it’s time to start working on our CI pipeline. I’ve chosen CircleCI because it’s a PaaS service and I wanted to try it out. Feel free to choose your preferred CI system here; the steps will be very similar.

As we’re using CircleCI, we need to create a folder named .circleci in the root of our project and a file named config.yml inside it. Do it and copy the following YAML code into the file.

version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8.10

    working_directory: ~/repo

    steps:
      - checkout

      # Download and cache dependencies
      - restore_cache:
          keys:
          - v1-dependencies-{{ checksum "package.json" }}
          # fallback to using the latest cache if no exact match is found
          - v1-dependencies-

      - run:
          name: Install Serverless CLI and dependencies
          command: |
            sudo npm i -g serverless
            npm install

      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ checksum "package.json" }}
        
      # run tests!
      - run: 
          name: Run tests with coverage
          command: npm test --coverage

      - run:
          name: Deploy application
          command: sls deploy --stage pre

The code is pretty straightforward. We’re using a Docker image with Node 8.10 inside it, checking out the code, installing the Serverless Framework, installing our project dependencies, running some unit tests and deploying to a stage called pre.

Configure CircleCI

Before pushing this file to GitHub, we need to configure CircleCI to listen to our repository and execute the actions we set in the config.yml. First of all, create an administrator user for your CI (I named it circleci) and disable the console password so it can only access AWS programmatically.
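You can create this user from the IAM console or, as a sketch, from the CLI (assuming your own profile has admin rights; we attach the AdministratorAccess managed policy for now and will restrict it at the end of the article):

# Create the CI user (no console password, programmatic access only)
aws iam create-user --user-name circleci --profile vgaltes-serverless

# Give it administrator permissions for now
aws iam attach-user-policy --user-name circleci \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess \
    --profile vgaltes-serverless

# Generate the access keys you'll paste into CircleCI later
aws iam create-access-key --user-name circleci --profile vgaltes-serverless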

Then, go to Add Projects and click on the Set Up Project button of the project you want to add. Check that the operating system is Linux and that the language is Node, and click Start building. This will fire your first build.

Now, click on the settings button of the builds page, in the top right corner. On the left-hand side of the screen, there’s a section called Permissions. Click the AWS Permissions button.

Project permissions

Set the access keys of your CI user there.

AWS Permissions

It’s time to push and see what happens in CircleCI.

CircleCI Build

Everything looks good. Let’s take a look at our development account in AWS.

AWS Lambda console

Voilà!

AWS Assume role

We now have a lambda deployed into our development account using CircleCI, which is pretty good. But most organisations have a completely separate account for their production stuff. How can we deploy from a system where we have a dev account configured to a completely different account? AWS Assume role to the rescue!

First of all we need to get the account ID of our dev account. Go to Support -> Support Center and get the ID from there.
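Alternatively, if you already have a CLI profile for the dev account, you can get the account ID without leaving the terminal:

# Prints the twelve-digit account ID behind the profile
aws sts get-caller-identity --profile vgaltes-serverless --output text --query Account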

Now we need to create a role in the production account, so log in there with your administrator account, go to the IAM service and create a new role. The type should be “Another AWS account” and you must provide the dev account ID.

Create new role - step 1

Clicking Next, we’ll provide the permissions for the role. Let’s choose administrator permissions for now to check that everything works, and we’ll change that later.

Create new role - step 2

And finally, let’s set a name for the role, in our case circleci_role.

Create new role - step 3

We now have the role created. Next, we have to change the users (or groups) in our dev account to allow them to switch to this role and hence access the production account.

We’re going to do this for a particular user, but you can choose a group as well. That will depend on how you want to organise your team and your organisation. Do you want everybody to be able to deploy directly to prod, or just the CI user? The second approach sounds more sensible.

So, go to your dev account, go to IAM service and select the user you want to give permissions to. Click on permissions -> add inline policy and use the following JSON:

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::262170986110:role/circleci_role"
  }
}

Create new policy

Click on review and set a name for the policy, for example allow-assume-circleci-role-production.

It’s time to test that it works. We can do four types of tests:

Test 1 - Using the UI

We can assume a role using the website UI. Click on your username and then “Switch Role”. Fill in the form with the required data: the production account ID and the production role we’ve just created.

Assume role

Now, you should see the role indicator in the place where your name previously was. You’re now in your production account and you can take a look at your S3 buckets there.

Assume role

Test 2 - Using profiles with the CLI

We can assume a role by properly configuring an AWS named profile. As you might know, profiles are a way to hold credentials for more than one account, accessible by name.

Edit your ~/.aws/credentials file and add the following configuration:

[vgaltes-prod]
role_arn = arn:aws:iam::262170986110:role/circleci_role
source_profile = vgaltes-serverless

The source profile is the named profile of your dev account, and the role ARN is the ARN of the role in the production account that you want to assume.

Let’s try this. Go to your project base folder and type:

aws s3 ls --profile vgaltes-prod

And you should see your S3 buckets on the production account.

Test 3 - Getting the credentials using the CLI

The third option you have (and the first one that will lead us to what we’d need to do in the CI environment) is to retrieve the credentials needed to log into the production account (temporary ones) via AWS STS. Open the console and type:

aws sts assume-role --role-arn "arn:aws:iam::262170986110:role/circleci_role" --role-session-name "Vgaltes-Prod" --profile vgaltes-serverless

Where the role arn is the role you want to assume and profile is your dev profile.

This command should return something like this:

{
    "AssumedRoleUser": {
        "AssumedRoleId": "AROAIHUHXX66KCXPBZHP4:Vgaltes-Prod", 
        "Arn": "arn:aws:sts::262170986110:assumed-role/circleci_role/Vgaltes-Prod"
    }, 
    "Credentials": {
        "SecretAccessKey": "Q833K5zxeaWiWI1hBbbmfTYF01Px72imuvcGB3TA", 
        "SessionToken": "FQoDYXdzENz//////////wEaDKbdCGIckdfH7OdGcyLwAY1V/m+S38Fu/Mh+KHWNbg58kfm7047AAMYZR1kVdFjuvr/idqEHLqkYq3aRWr2G+x1ZM2fQeB0RaHTlNpQ9gxrSFooFRSTSKK3rOpfIHyX9uQN5yPSUotA/At6LRYCqhDEIRsWVYNFFNjjjCgL4N8uyDsZfLcIsZ2zbTV/3oJSOQdm2i07pWwBI5w4J5l4oEgN7vZlq41/HK10jTHKtoYf+xoDUD6fa/R6gUvsphhXJ16Cs9OlhbljvcIweYKrnZCDR/u0GCyLs7PrIyAA1AgEFSqeb3dmI5ONwTpX8GSmLPuTOOfRaHTvbKeI3hyy0hCiJ0ePWBQ==“, 
        "Expiration": "2018-04-19T20:05:45Z", 
        "AccessKeyId": "ASIAJMXB2HHJRVWLFMXQ"
    }
}

Now we need to use this data. Go to your console and type:

export AWS_ACCESS_KEY_ID=ASIAJMXB2HHJRVWLFMXQ
export AWS_SECRET_ACCESS_KEY=Q833K5zxeaWiWI1hBbbmfTYF01Px72imuvcGB3TA
export AWS_SESSION_TOKEN=FQoDYXdzENz//////////wEaDKbdCGIckdfH7OdGcyLwAY1V/m+S38Fu/Mh+KHWNbg58kfm7047AAMYZR1kVdFjuvr/idqEHLqkYq3aRWr2G+x1ZM2fQeB0RaHTlNpQ9gxrSFooFRSTSKK3rOpfIHyX9uQN5yPSUotA/At6LRYCqhDEIRsWVYNFFNjjjCgL4N8uyDsZfLcIsZ2zbTV/3oJSOQdm2i07pWwBI5w4J5l4oEgN7vZlq41/HK10jTHKtoYf+xoDUD6fa/R6gUvsphhXJ16Cs9OlhbljvcIweYKrnZCDR/u0GCyLs7PrIyAA1AgEFSqeb3dmI5ONwTpX8GSmLPuTOOfRaHTvbKeI3hyy0hCiJ0ePWBQ==

Obviously, you’ll need to use the values the previous command returned.

Now, any command you type using the AWS CLI will use these credentials. Let’s try it. Open a console and type:

aws s3 ls

You should see the same buckets as in the previous example.

Test 4 - Automating the STS access

We’re now going to create a bash script to automate what we did in the previous test. Open your favourite editor and type:

unset AWS_SESSION_TOKEN

temp_role=$(aws sts assume-role \
                    --role-arn "arn:aws:iam::262170986110:role/circleci_role" \
                    --role-session-name "vgaltes-prod" \
                    --profile vgaltes-serverless)

export AWS_ACCESS_KEY_ID=$(echo $temp_role | jq .Credentials.AccessKeyId | xargs)
export AWS_SECRET_ACCESS_KEY=$(echo $temp_role | jq .Credentials.SecretAccessKey | xargs)
export AWS_SESSION_TOKEN=$(echo $temp_role | jq .Credentials.SessionToken | xargs)

Where role-arn is the role you want to assume and profile is your dev profile. Note that you need to have jq installed.

Give a name to the file (aws-cli-assumerole.sh, for example), give it the required execution permissions (chmod +x aws-cli-assumerole.sh) and source it (source aws-cli-assumerole.sh). Now you have the production credentials set, so you can try to list the S3 buckets again:

aws s3 ls

You should see the production buckets as a result.

Deploying to production using CircleCI

It’s time now to use the script we’ve created to deploy our solution to production. Copy the script into your project, let’s say inside a folder called scripts (scripts/aws-cli-assumerole.sh). Now you need to update your config.yml to call the script before deploying to prod. You also need to change the dependencies section to install the AWS CLI.

version: 2
jobs:
  build:
    docker:
      # specify the version you desire here
      - image: circleci/node:8.10

    working_directory: ~/repo

    steps:
      - checkout

      # Download and cache dependencies
      - restore_cache:
          keys:
          - v1-dependencies-{{ checksum "package.json" }}
          # fallback to using the latest cache if no exact match is found
          - v1-dependencies-

      - run:
          name: Install Serverless CLI and dependencies
          command: |
            sudo npm i -g serverless
            npm install
            sudo apt-get install awscli

      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ checksum "package.json" }}
        
      # run tests!
      - run: 
          name: Run tests with coverage
          command: npm test --coverage

      - run:
          name: Deploy application to pre
          command: sls deploy --stage pre

      - run:
          name: Deploy application to prod
          command: |
            chmod +x scripts/aws-cli-assumerole.sh
            source scripts/aws-cli-assumerole.sh
            sls deploy --stage prod

Let’s push this change and see if it works! (Note: the build might take a bit more time to run now, as it has to install the AWS CLI. A possible workaround is to use a Docker image with the CLI already installed; check the CircleCI documentation to see how to do this.)

Production build

Build looks promising, let’s take a look at our production account:

Lambda deployed in production

Voilà! Look what’s there!!

Final touch

We’re using an administrator account to deploy to production, which is not a very good idea. We should restrict the permissions of that account by creating a custom policy with only the permissions it needs. In order to know which permissions you need to set, you can use the Serverless Policy Generator. Another option is to go to CloudTrail in your development account and look at the event history of your CI user, in my case circleci.

Cloud trail for circleci user
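Once you’ve generated the restricted policy document, you can attach it to the production role from the CLI; a sketch, assuming you’ve saved it as circleci-deploy-policy.json (a hypothetical file name) and that you’re using credentials with IAM permissions on the production account:

# Attach the restricted inline policy to the production role
aws iam put-role-policy \
    --role-name circleci_role \
    --policy-name circleci-deploy \
    --policy-document file://circleci-deploy-policy.json \
    --profile vgaltes-prod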

In the previous article we saw how we can configure Vault and write and read static secrets. Today, we’re going to see how we can use Vault to generate temporary users on a MySQL server so we can control access in a more secure way.

First of all we’ll need a MySQL server connected to the same network as the Vault server. Let’s change the docker-compose.yml file to accomplish this.

version: '3'
 
services:
  mysql:
    image: mysql
    container_name: mysql
    environment:
      MYSQL_ROOT_PASSWORD: "mypassword"
      MYSQL_DATABASE: "test"
    ports:
      - 6603:3306
    volumes:
      - ./etc/mysql/mysql_data:/var/lib/mysql
    restart: always
    networks:
      - vault_mysql_net
 
  vault:
    depends_on:
      - mysql
    image: vault
    container_name: vault.server
    ports:
      - 8200:8200
    volumes:
      - ./etc/vault.server/config:/mnt/vault/config
      - ./etc/vault.server/data:/mnt/vault/data
      - ./etc/vault.server/logs:/mnt/vault/logs
    restart: always
    networks:
      - vault_mysql_net
    cap_add:
      - IPC_LOCK
    environment:
      VAULT_ADDR: http://127.0.0.1:8200
    entrypoint: vault server -config="/mnt/vault/config/config.hcl"
 
networks:
  vault_mysql_net:
    driver: bridge

What we’re doing here is adding another container based on the mysql image and putting both containers on the same network. To find out the IPs of the two containers, run docker inspect mysql and docker inspect vault.server. In my case, the two IPs are 172.18.0.2 and 172.18.0.3 respectively.
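If you only want the IP address and not the whole inspect output, you can let Docker’s template flag extract it for you (an optional shortcut):

# Print just the container's IP address on its networks
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mysql
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' vault.server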

Setting up MySQL

Remember that what we’re doing is setting up Vault so that we can ask it to create a temporary user and password to access a MySQL instance. We’re not setting up MySQL as a storage backend for Vault.

We’ll need a user in MySQL able to create new users. Vault will use this user to create (and drop, after the TTL expires) the temporary users it hands us to connect to MySQL. To increase the security of our system, we want only Vault to be able to connect using this user. Let’s configure MySQL first.

Connect to MySQL using the root account. As we’re exporting the MySQL port, we can do this from our laptop:

MacBook-Pro:TestVault vga$ mysql -uroot -pmypassword -h 127.0.0.1 -P 6603
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.20 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

Let’s create a test user that can only connect from the Vault server:

mysql> CREATE USER 'test'@'172.18.0.3';
Query OK, 0 rows affected (0.02 sec)

Set a password for that user:

mysql> set password for 'test'@'172.18.0.3' = 'test';
Query OK, 0 rows affected (0.00 sec)

And now, grant privileges to the user. As we want Vault to create only readonly users (in this example), give it just the CREATE USER and SELECT privileges:

mysql> grant create user, select on *.* to 'test'@'172.18.0.3' with grant option;
Query OK, 0 rows affected (0.00 sec)

We’re done with MySQL. Let’s move to Vault again.

Setting up the database backend

To be able to create dynamic secrets, we need to set up the database backend. So, let’s log in to Vault as admin:

MacBook-Pro:TestVault vga$ vault auth -method=userpass username=admin password=abcd
Error making API request.

URL: PUT http://127.0.0.1:8200/v1/auth/userpass/login/admin
Code: 503. Errors:

* Vault is sealed

Ooops! As we discussed in the previous article, every time we restart Vault it becomes sealed. Unseal it and log in as admin again. Now it’s time to mount the database backend. To do that, run the following command:

MacBook-Pro:TestVault vga$ vault mount -path=mysql1 database
Mount error: Error making API request.

URL: POST http://127.0.0.1:8200/v1/sys/mounts/mysql1
Code: 403. Errors:

* permission denied

Woooot! We don’t have permission to mount the backend. We need to change the policy and rewrite it. Let’s change the adminpolicy.hcl file from the previous article to something like this:

#authorization
path "auth/userpass/*" {
    capabilities = ["create", "read", "update", "delete", "list"]
}

#policies
path "sys/policy/*" {
    capabilities = ["create", "read", "update", "delete", "list"]
}

#mounts
path "/sys/mounts/*" {
    capabilities = ["create", "read", "update", "delete", "list"]
}

#mysql1
path "mysql1/*" {
    capabilities = ["create", "read", "update", "delete", "list"]
}

And rewrite it:

MacBook-Pro:TestVault vga$ vault write sys/policy/admins policy=@"adminpolicy.hcl"
Success! Data written to: sys/policy/admins

Cool, let’s try to mount the database backend:

MacBook-Pro:TestVault vga$ vault mount -path=mysql1 database
Successfully mounted 'database' at 'mysql1'!

Awesome, we’ve mounted the database backend at the path mysql1. Now it’s time to configure the role we’ll be using in this example. By configuring the role we define a couple of things: first, the name of the credentials endpoint we’re going to read in order to create the user; and second, the statements Vault will run to create the user. Let’s run the following command:

MacBook-Pro:TestVault vga$ vault write mysql1/roles/readonly db_name=mysql creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT SELECT ON *.* TO '{{name}}'@'%';" default_ttl="1h" max_ttl="24h"
Success! Data written to: mysql1/roles/readonly

And finally, we need to configure the connection to the MySQL server where we want to create the users. Let’s do it:

MacBook-Pro:TestVault vga$ vault write mysql1/config/mysql plugin_name=mysql-database-plugin connection_url="test:test@tcp(172.18.0.2:3306)/" allowed_roles="readonly"


The following warnings were returned from the Vault server:
* Read access to this endpoint should be controlled via ACLs as it will return the connection details as is, including passwords, if any.

Cool, we’re ready to go. Let’s read the credentials for the readonly role in mysql1 and see what happens:

MacBook-Pro:TestVault vga$ vault read mysql1/creds/readonly
Key            	Value
---            	-----
lease_id       	mysql1/creds/readonly/977e3682-1f02-7165-43d3-919ba4512223
lease_duration 	1h0m0s
lease_renewable	true
password       	A1a-wq10q1qp3v1055up
username       	v-userpass-a-readonly-v5941vstst

Wow! It looks like Vault has created a user and given us its username and password. Let’s try to log into MySQL with those credentials:

MacBook-Pro:TestVault vga$ mysql -uv-userpass-a-readonly-v5941vstst -p"A1a-wq10q1qp3v1055up" -h 127.0.0.1 -P 6603
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.20 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

Summary

Magic!!!! It worked! We can connect to the MySQL instance using the temporary user Vault has created for us. We no longer have to maintain users for different applications or usages; we just need to give our clients access to this endpoint. And thanks to the power of policies, we can give access to this endpoint only to the users we want. Also, the user used to create users is only accessible from the Vault IP, so it’s quite secure.
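As an illustration of that last point, a minimal policy that grants access to nothing but the credentials endpoint might look like this (the policy name mysql-reader is hypothetical; the write command follows the same pattern we use for the other policies in these articles):

# A policy that can only read the readonly MySQL credentials
cat > mysqlreaderpolicy.hcl <<'EOF'
path "mysql1/creds/readonly" {
    capabilities = ["read"]
}
EOF

vault write sys/policy/mysql-reader policy=@"mysqlreaderpolicy.hcl"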

Vault from HashiCorp is an amazing tool to manage the secrets of your organisation. Not only can it help you manage what they call static secrets, which you can write and read, but it also allows you to manage dynamic secrets to, for example, create temporary users in a MySQL database with certain permissions. It helps you have a more secure organisation.

Today we’re going to see how we can configure and use the basics of Vault.

Environment

We’re going to define a non-production-ready environment for our tests. If you want to learn what you should do to productionise this environment, please follow their hardening guide.

Let’s start then. Assuming that you already have Docker installed, let’s create a Docker Compose file to create the Vault server. Create a folder on your laptop and create a file called docker-compose.yml with the following content:

version: '3'
 
services:
  vault:
    image: vault
    container_name: vault.server
    ports:
      - 8200:8200
    volumes:
      - ./etc/vault.server/config:/mnt/vault/config
      - ./etc/vault.server/data:/mnt/vault/data
      - ./etc/vault.server/logs:/mnt/vault/logs
    restart: always
    cap_add:
      - IPC_LOCK
    environment:
      VAULT_ADDR: http://127.0.0.1:8200
    entrypoint: vault server -config="/mnt/vault/config/config.hcl"

This will bring up a new container named vault.server from the vault image. We’re defining the three volumes that Vault will use, exposing the address of the server in case we want to access it from within the container, exposing the port Vault is using and, finally, setting the entry point to the command used to start a Vault server. As you can see, the command needs a configuration file that we don’t have yet. Let’s go to <base_folder>/etc/vault.server/config and create the config.hcl file with the following content:

storage "file" {
        path = "/mnt/vault/data"
}

listener "tcp" {
        address = "0.0.0.0:8200"
        tls_disable = 1
}

Using this file, we’re telling Vault that we want a server listening on port 8200 that doesn’t use HTTPS, and that will use the file system as storage. If you want, you can use other storage backends, Consul being the most popular one.

Now we can bring up the container. Go to your working folder and type docker-compose up.

You should see the result of starting Vault in your console.

Container startup log

If you want to run the container in detached mode, type docker-compose up -d.

Now it’s time to configure Vault. By default, Vault creates a token (the default authentication mechanism) for root access. We can use this token to make the initial set up of Vault. But first, we need to unseal Vault.

Unsealing Vault

First of all, we need to initialise Vault. Initialising Vault means creating the root token we’ve already talked about and, much more importantly, creating the unsealing keys to, as you can imagine, unseal Vault. To initialise Vault we use the command vault init. By default this creates 5 keys for us, and we’ll need to provide three of them to unseal Vault. To change this configuration, use the parameters -key-shares and -key-threshold. Let’s use 5 shares and a threshold of 2: vault init -key-shares=5 -key-threshold=2.

MacBook-Pro:TestVault vga$ vault init -key-shares=5 -key-threshold=2
Error initializing Vault: Put https://127.0.0.1:8200/v1/sys/init: http: server gave HTTP response to HTTPS client

Oops! We can’t connect to Vault! This is because, by default, the vault CLI tries to connect to localhost on port 8200 using HTTPS. In our case, localhost and port 8200 are correct, but we’ve disabled TLS, so we need to connect via HTTP. Let’s configure our terminal to connect to the right place: export VAULT_ADDR=http://127.0.0.1:8200

Let’s try to run the init command again. Now, you should see something like this:

MacBook-Pro:TestVault vga$ vault init -key-shares=5 -key-threshold=2
Unseal Key 1: JjPOZr3C27PkMUT+NMAANsE9LD/EMOxeg4LrazntoL29
Unseal Key 2: HEiabCxFrigrcoKBqKMD0SI2cIQqF8rIai1/7iMynQ2z
Unseal Key 3: GmjlrY/kYko2zwWMvG1Y5Ts3VZWEEg8dsqUx1Fab7R1f
Unseal Key 4: D1GSRtVE3vvw1Inbwiv5ohJK/nmJgAuIcISTQx7+N4za
Unseal Key 5: O4PkY3gMbHMOnSdMenyMyD21Au+zrv8VelihpDQPM+W6
Initial Root Token: f1682479-2a28-d577-c3de-521431116581

Vault initialized with 5 keys and a key threshold of 2. Please
securely distribute the above keys. When the vault is re-sealed,
restarted, or stopped, you must provide at least 2 of these keys
to unseal it again.

Vault does not store the master key. Without at least 2 keys,
your vault will remain permanently sealed.

As the message indicates, every time the server is re-sealed (using vault seal) or restarted we’ll need to provide two of these keys. Save them in a secure place, encrypt them and cast a spell to protect them.

And now, finally, we can unseal the server. Type vault unseal. You will see:

MacBook-Pro:TestVault vga$ vault unseal
Key (will be hidden): 

Paste one of the keys there and repeat the process as many times as the threshold you set in the initialisation phase.
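You can also pass the key directly as an argument, which is handy for scripting (though bear in mind it ends up in your shell history). With our threshold of 2, two keys are enough:

# Unseal using two of the keys returned by vault init
vault unseal JjPOZr3C27PkMUT+NMAANsE9LD/EMOxeg4LrazntoL29
vault unseal HEiabCxFrigrcoKBqKMD0SI2cIQqF8rIai1/7iMynQ2z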

When you have finally unsealed the server, you should see something like this:

MacBook-Pro:TestVault vga$ vault unseal
Key (will be hidden): 
Sealed: false
Key Shares: 5
Key Threshold: 2
Unseal Progress: 0
Unseal Nonce: 

Awesome. It’s time to configure Vault!

Configure an admin user

You can use different authentication backends in Vault, like GitHub, Okta or LDAP. In this example, we’re going to use username & password.

To perform any action at this stage, we need to use the root token. Let’s grab it from the result of the init command and use it: vault auth <token>

MacBook-Pro:TestVault vga$ vault auth f1682479-2a28-d577-c3de-521431116581
Successfully authenticated! You are now logged in.
token: f1682479-2a28-d577-c3de-521431116581
token_duration: 0
token_policies: [root]

Now we need to enable the authentication backend: vault auth-enable userpass

MacBook-Pro:TestVault vga$ vault auth-enable userpass
Successfully enabled 'userpass' at 'userpass'!

Policies

Vault uses policies to grant and permit access to paths (everything in Vault is path-based). So, the first thing we need to do is create a policy for our brand new admin user. Create a file called adminpolicy.hcl with the following contents:

#authorization
path "auth/userpass/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

#policies
path "sys/policy/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

With this policy, a user will be able to manage users and policies, which is what we need for now. Add this policy using the following command:

MacBook-Pro:TestVault vga$ vault write sys/policy/admins policy=@"adminpolicy.hcl"
Success! Data written to: sys/policy/admins

As we already said, everything is a path in Vault, policies being no exception. To write a new policy, we need to write to the path sys/policy/<name>. If we want, we can read the recently created policy:

MacBook-Pro:TestVault vga$ vault read sys/policy/admins
Key  	Value
---  	-----
name 	admins
rules	#authorization
	path "auth/userpass/*" {
	  capabilities = ["create", "read", "update", "delete", "list"]
	}

	#policies
	path "sys/policy/*" {
	  capabilities = ["create", "read", "update", "delete", "list"]
	}

It’s time to create a user linked to this policy.

Users

To create a user that uses this policy, we need to run the following command:

MacBook-Pro:TestVault vga$ vault write auth/userpass/users/admin password=abcd policies=admins
Success! Data written to: auth/userpass/users/admin

As always, we’re writing to a path.

Now we can use this new user to create other users. Let’s try that. First, we need to authenticate to Vault using this user:

MacBook-Pro:TestVault vga$ vault auth -method=userpass username=admin password=abcd
Successfully authenticated! You are now logged in.
The token below is already saved in the session. You do not
need to "vault auth" again with the token.
token: b54fe4f4-060f-b68b-ea7c-ffddea53e7c2
token_duration: 2764800
token_policies: [admins default]
MacBook-Pro:TestVault vga$ 

Time to create a new user. We’d like to create a user with permissions to write secrets in his private space and secrets in a team space. Create a file called <username>.hcl (in my case username = vgaltes) with the following content (replace vgaltes with your username and team with your team name):

#user authentication
path "auth/userpass/users/vgaltes" {
  capabilities = ["update"]
}

#own secrets
path "secret/vgaltes/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

#team secrets
path "secret/team/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

With the first part of the policy, we’re allowing the user to change their password. With the second part, we’re allowing the user to manage their own space (well, a space named after their username), and with the third one we’re allowing the user to manage the team space.

Time to add the policy:

MacBook-Pro:TestVault vga$ vault write sys/policy/vgaltes-policy policy=@"vgaltespolicy.hcl"
Success! Data written to: sys/policy/vgaltes-policy

Now we can create a user associated with this policy:

MacBook-Pro:TestVault vga$ vault write auth/userpass/users/vgaltes password=abcd policies=vgaltes-policy
Success! Data written to: auth/userpass/users/vgaltes

User created!! Let’s write and read some secrets.

Managing secrets

We have now created a new user. We’ve sent him an email telling him that his user has been created and that his password is abcd. The password is not very secure, so we urge him to change it as soon as possible. Let’s see what he can do.

First of all, he needs to authenticate to Vault. Easy peasy, we already know how to do that:

MacBook-Pro:TestVault vga$ vault auth -method=userpass username=vgaltes password=abcd
Successfully authenticated! You are now logged in.
The token below is already saved in the session. You do not
need to "vault auth" again with the token.
token: 2a6e67a9-9b3a-39a7-5b29-bc3cecf79151
token_duration: 2764800
token_policies: [default vgaltes-policy]
MacBook-Pro:TestVault vga$ 

Time to change the password. As always, we just need to write into a specific path:

vault write auth/userpass/users/vgaltes password=abcd1234

We can now try to authenticate using the old password. Hopefully it won’t work:

MacBook-Pro:TestVault vga$ vault auth -method=userpass username=vgaltes password=abcd
Error making API request.

URL: PUT http://127.0.0.1:8200/v1/auth/userpass/login/vgaltes
Code: 400. Errors:

* invalid username or password

Awesome! Let’s try with the new password. Now, we’re not going to provide the password, so the CLI will ask for it:

MacBook-Pro:TestVault vga$ vault auth -method=userpass username=vgaltes
Password (will be hidden): 
Successfully authenticated! You are now logged in.
The token below is already saved in the session. You do not
need to "vault auth" again with the token.
token: 12de59c3-a23e-7dea-eff0-42dfeb8b73fd
token_duration: 2764799
token_policies: [default vgaltes-policy]

Now we’re ready to write our first secret. Let’s start with something simple:

MacBook-Pro:TestVault vga$ vault write secret/vgaltes/hello value=world
Success! Data written to: secret/vgaltes/hello

Let’s try to read it now:

MacBook-Pro:TestVault vga$ vault read secret/vgaltes/hello
Key             	Value
---             	-----
refresh_interval	768h0m0s
value           	world

Cool! We can do the same with the team space. Just change <username> for <teamname>.
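For example, with the policy above (which grants access to secret/team/*), writing and reading a team secret looks like this:

vault write secret/team/hello value=world
vault read secret/team/hello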

Let’s try to write to someone else’s space:

MacBook-Pro:TestVault vga$ vault write secret/peter/hello value=world
Error writing data to secret/peter/hello: Error making API request.

URL: PUT http://127.0.0.1:8200/v1/secret/peter/hello
Code: 403. Errors:

* permission denied

As expected, we can’t write to that path.

Summary

We’ve set up a basic (but operational) Vault server. We know how to create policies and users, and how those users can log into the system and create and read secrets. In the next article, we’ll see how we can use Vault to create temporary users in a MySQL database.

When you start working on a new project, there are a couple of things that you should try to discover as fast as you can: the shape of the code and the internals of the team you’re working with.

You can (and should) try to discover both of them through conversations (in the end, developing software is having conversations (link in Spanish)). But it’s also useful to try to discover these things for yourself, to confirm what the conversations are saying, or to go a little bit faster.

There are a lot of tools out there to analyse your code. A very useful one is a static analysis tool like NDepend. With a tool like NDepend you can discover a lot of issues in your code, and you can even analyse its evolution using different snapshots.

But sometimes you need another type of information. Temporal coupling and hotspots are things you can discover with tools like CrystalGazer or CodeScene (https://codescene.io). In this article we’ll see how we can combine both types of tools.

Hotspots

A hotspot is a file that has a high chance of needing a refactor. We’re going to use three metrics to evaluate this:

  • Number of commits performed that included the file
  • Complexity of the file
  • Number of people that contributed to the file

Let’s use CrystalGazer to gather this information. I’m going to use the ASP.NET MVC project for this study. I’m doing this because Adam Tornhill has done the same study using his tool CodeScene, so I can compare the results :-).

Number of commits

Number of commits

As you can see, there are six files with more than 100 revisions. One of them is a file called ControllerActionInvokerTests.cs, with 106 revisions.
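If you want a rough approximation of this metric with plain git (not how CrystalGazer computes it, just a quick sanity check you can run on any repository):

# Count how many commits touched each file, most-changed first
git log --pretty=format: --name-only | grep -v '^$' | sort | uniq -c | sort -rn | head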

Complexity

Complexity

Here we’re using a very simple metric to analyse complexity: the number of lines. Number of lines is a good enough metric to perform this preliminary study, and it’s easy and fast to calculate. You can use other metrics such as the number of tabs, cyclomatic complexity and so on.

In this case, our friend is the winner BY FAR.
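Again, a plain-shell approximation, assuming we measure complexity as the number of lines (the first line of the output will be the grand total):

# Lines per tracked C# file, biggest first
git ls-files '*.cs' | xargs wc -l | sort -rn | head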

Contributors

Contributors

And finally, we’re going to take a look at the number of contributors. In this case, our friend is in a very honorable sixth position (fourth if we exclude the resource files).
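A plain-git approximation of the contributor count for a single file (replace <path-to-file> with the file you’re studying):

# Number of distinct authors that touched the file
git log --format='%an' -- <path-to-file> | sort -u | wc -l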

So, it looks like we have a good candidate to study. Let’s take a look at what NDepend says about it.

NDepend

NDepend is an amazing tool with lots of options. In this case, I want to concentrate on the technical debt metric. According to NDepend, the project has a debt of 198.53 man-days and a rating of B, needing 90 man-days to reach a rating of A. So, I want to take a look at the worst classes to start working on them. NDepend analyses my project in a very different way than we did before: it looks at the internals of my code at the moment I run the analysis. So, let’s see what it says when I run the “Types Hot Spot” analysis of the debt.

Types Hot Spot analysis

Interestingly, our dear friend is also in a very “good” position in that list.

Summary

In this article we’ve taken an initial look at how to detect hotspots in our code with two very different tools: a static analysis tool like NDepend, and a repository analysis tool like CrystalGazer. We’ve seen that, in our case, the results are quite similar. This won’t necessarily always happen.

We have different tools to analyse the state of our code. Let’s try to take advantage of all of them.

In the last few weeks I’ve been working on Crystal Gazer, a nodejs console application to gather information from your Git repository. You can find it on GitHub and on NPM.

One of the things I’d like to do is to track the evolution of a function. Has it been modified a lot? How many people have been working on it? Is the function too long?

To answer the first question, we could rely on the git log -L:function:file command to give us all the changes a function has gone through. We need a couple of things to run this command: the name of the file and the name of the function.
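For example, to follow the evolution of a hello function defined in handler.cs (hypothetical names, just to show the shape of the command):

git log -L :hello:handler.cs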

The name of the file is the easy part. We have the names of the files in the original git log (see the README.md of Crystal Gazer to know how it works) and we can always ask the user to input it. But given a file, we’d like to list the different functions with their statistics. So we need an automated way to extract the names of the functions from a source code file (we’re going to work with .cs files).

First attempt

My first approach was to try to code something by myself. A combination of regexes, a couple of ifs, maybe some splits… That didn’t work well. The number of cases to take into account is big enough to make writing such a function very difficult.

ANTLR

Then, one acronym came to my mind: AST. An Abstract Syntax Tree is a tree representation of the abstract syntactic structure of source code written in a programming language (Wikipedia). This is what, for example, escomplex uses to compute its metrics. Can we create the AST of C# code in JavaScript?

The answer is ANTLR. ANTLR is a parser generator that can create an AST from a file using a grammar that we can define. Fortunately, we don’t need to create the C# grammar by ourselves; we can grab it from here.

These are the steps I followed to be able to get the function names from a C# file.

Create the lexer, parser and the listener from the grammar.

In a VERY small nutshell, ANTLR uses a lexer to create tokens from your input, then uses these tokens to initialise the parser that creates the AST. When you traverse the tree, the listener gets notified about the different things the parser finds.

To create these three files, we’ll need to use the antlr tool. So, let’s download the tool and make it accessible. Follow these instructions just changing 4.5.3 to 4.7.

Now we can use the tool to generate the lexer. Download CSharpLexer.g4 from the grammars repository and copy it to your project folder. Run the following command:

antlr4 -Dlanguage=JavaScript CSharpLexer.g4

This will generate CSharpLexer.js and CSharpLexer.tokens.

We need to do the same for CSharpParser.g4. So run the following command:

antlr4 -Dlanguage=JavaScript CSharpParser.g4

In this case this command will generate CSharpParser.js, CSharpParser.tokens and CSharpParserListener.js.

Prepare a nodejs project

We’re going to create a nodejs project to use all this stuff. Let’s add antlr4 to it:

npm install antlr4 --save

Create a file called index.js and import the necessary modules:

const antlr4 = require('antlr4/index');
const fs = require('fs');
const CSharpParser = require('./CSharpParser.js');
const CSharpLexer = require('./CSharpLexer.js');

If you now run this script you’ll get errors loading the CSharpLexer module. That’s because the generated file contains some invalid code:

  • On line 5 there’s an import java.util.Stack; statement, which is obviously invalid in JavaScript. Just comment out (or delete) the line.
  • On lines 1770 to 1773 (both included) there is some C# code. Just change each variable initialisation to var <variable_name>.
  • Between lines 1866 and 1883 (both included) there is more C# code. Comment it out or delete it.

Creating the listener

With the help of ANTLR we’ve created a CSharpParserListener. We can use that listener as a base class for more specific listeners. As we want to get the names of the different methods of the file, let’s create a listener for that.

Create a file called CSharpFunctionListener.js and copy the following code:

const antlr4 = require('antlr4/index');
const CSharpLexer = require('./CSharpLexer');
const CSharpParser = require('./CSharpParser');
var CSharpListener = require('./CSharpParserListener').CSharpParserListener;

var CSharpFunctionListener = function(res) {
    this.Res = res;    
    CSharpListener.call(this); // inherit default listener
    return this;
};
 
// inherit default listener
CSharpFunctionListener.prototype = Object.create(CSharpListener.prototype);
CSharpFunctionListener.prototype.constructor = CSharpFunctionListener;


CSharpFunctionListener.prototype.enterMethod_member_name = function(ctx){
    this.Res.push(ctx.getText());
}

exports.CSharpFunctionListener = CSharpFunctionListener;

Nothing too special here. The important part is that we’re overriding the enterMethod_member_name method, which is the method that will be called when the parser finds a method name.

Putting all together

Time to use all this stuff. Go back to your index.js file and add the following code:

// Import the listener we created in the previous step
const CSharpFunctionListener = require('./CSharpFunctionListener').CSharpFunctionListener;

var input = fs.readFileSync('aFile.cs').toString();

var chars = new antlr4.InputStream(input);
var lexer = new CSharpLexer.CSharpLexer(chars);
var tokens  = new antlr4.CommonTokenStream(lexer);
var parser = new CSharpParser.CSharpParser(tokens);

var tree = parser.namespace_member_declarations();   
var res = [];
var csharpClass = new CSharpFunctionListener(res);
antlr4.tree.ParseTreeWalker.DEFAULT.walk(csharpClass, tree);

console.log("Function names: ", res.join(','));

As you can see, we’re creating the lexer and the parser. Then we use ANTLR to walk the tree, passing the listener in order to be notified.

If you run it, you will get the names of the functions.

Summary

This is the first thing I’ve done with ANTLR and it’s been really pleasant. The only problem was removing the invalid code from the lexer, and that’s all. Expect a new version of Crystal Gazer using some ANTLR magic soon!!