Retrospective of 2021. Plans for 2022.

Well, it’s almost the end of the year, and as has already become a tradition, it’s a good time to look back, analyze this year’s achievements, and create a plan for the next year.

Last year’s retrospective article – Retrospective of 2020. Plans for 2021.

Progress on 2021’s plans

  • Finish the Leading People and Teams Specialization – 100% (cert link)
  • Receive Microsoft Certified: Azure Solutions Architect Expert certificate
  • Run at least 100 runs with a minimum of 500km distance and take part in a race – 100%. According to Strava, I’ve done 100 runs with a total distance of 1040 km.
  • Solve 30 HackerRank problems – 1/30
  • Learn 1 additional programming language – 100%, YAML (according to Google, it still counts as a programming language xD). I improved my skills in it thanks to different DevOps projects.
  • Post at least 12 new posts in the blog – 4/12
  • Contribute to at least one open-source project on Github
  • Receive a Professional Scrum Developer certificate
  • Get an English certificate (IELTS minimum 6.0 or similar) – postponed to 2023/4 due to changed plans and the fact that the certificate expires 365 days after the test.
  • Visit at least 2 new countries – 150% (visited 3 countries I’d never been to before – Lithuania, Latvia, Estonia)
  • Hike to Gerlach peak
  • Sleep at least 1 night in a tent – 300% (as 3 nights have been spent in a tent)
  • Donate at least 1.5 liters of blood (3 times) – 100% (1.8 liters – 4 times)
  • Complete First Aid Course
  • Meaningful work-related change – 100% (changed to a workplace that gives me the satisfaction of solving customers’ problems every day)
  • Visit a technical conference or meetup (min 2)
  • Start investing money – 100% (opened a few retirement accounts and invested in some stocks for the shorter term)
  • Sleep 8+ hours – 85% (I do my best to get to bed at 10 PM and wake up at 7 AM at least 5 days a week)
  • Do some crazy adrenaline rush thing (paraglide, glider) – 100% – finished the Vienna City Marathon 2021. Nevertheless, the glider is waiting for me in 2022 ;).
  • Do gym sessions (at least 1 per week, or 52 in a year) – 5/52
  • Do skiing (3+ times) – 66% (2/3)
  • Add at least 2 new board games to the home collection – 150% (+ 3 new board games in our collection)

Summary:

  • 51% (12/23) – Fully completed targets
  • 14% (3/23) – Progressed well, but not completed
  • 35% (8/23) – Have not been started or progress is negligible

Out-of-plan achievements of 2021

Besides the main plan with the points above, there are a few things I’ve managed to do and am happy to share with you:

  1. I’ve run a full marathon distance (at the annual Vienna City Marathon 2021 race). My target time was 3:50, but reality made its own corrections: after the 36th kilometer I got a painful calf cramp, so for the last 6 kilometers I mostly walked rather than ran. The final time was 4:21. Even so, I’m really proud of this achievement, because the most difficult part of a marathon is not the race itself but the 3 months of preparation before race day.
  2. I’ve passed the test for a motorcycle driver’s license and bought my first bike. As it all happened close to the end of the riding season (in Poland it usually runs from March to November), I hope to fully enjoy it from spring 2022.

Even though 51% is far away from 100%, in last year’s post I wrote:

So achieving >= 50% of goals might be a reasonable approach.

which means the “baseline” has been met. Nevertheless, as we all strive to be better in the “New Year”, I decided to raise the baseline to 60% for the next year. This decision changed my approach to creating next year’s to-do list from “let’s put in everything and see what happens” to defining more precise targets divided by category.

Plans for 2022

A few unfinished as well as “permanent” targets have been carried over from last year; in addition, there are a few new points:

Health

  1. Do 48 gym sessions (~1 per week)
  2. Run at least 100 runs with a minimum of 600km distance and take part in a race

Professional & Personal Development

  1. Receive AWS Certified Solutions Architect certificate
  2. Receive a Professional Scrum Product Owner certificate
  3. Solve 48 HackerRank problems (~1 per week)
  4. Read at least 8 books
  5. Learn at least 1 new programming language or fundamental framework
  6. Have at least 100 “active days” in my GitHub account

Leisure

  1. Hike to Gerlach peak ⛰
  2. Do glider flight ✈️
  3. Visit 1 new country
  4. Complete First Aid Course ⛑
  5. Add at least 1 new board game to the home collection

Miscellaneous

  1. Post at least 12 new posts in the blog (~1 per month)
  2. Donate at least 1.5 liters of blood (3 times)

Final thoughts

I believe that at the beginning of 2021 all of us had huge expectations for the year. Most of them were related to a “return to normality” and the end of pandemic life. As we now know, that was not the case, but today we are undoubtedly better informed and prepared for potential pandemic dangers than in the previous year. In the new 2022 year, we need to keep our hope for a return to normal life, but besides hoping, we have to do our best to reach that target. Despite COVID-19, life is moving forward; let’s not focus only on the bad news, but support each other, focus on our achievements, and do regular workouts!

Happy and “Normal” New 2022 Year!

P.S. Here are a few highlight photos from 2021. Unfortunately, not every occasion has a photo, but at least a few of them can share the mood of my 2021.

How to prevent deletion of your AWS RDS backups.

Do you have a backup of your database? Is it stored in a safe place? Do you have a plan B if someone deletes your database and backups, by accident or intentionally?

This is the list of questions I faced after a recent security breach. Let’s imagine a situation where we have a database in the cloud and someone accidentally removes that database and all its backups/snapshots. Sounds like an impossible disaster, right? Well, even if it is pretty hard to do accidentally, that doesn’t mean it’s impossible. Or what if someone “from outside” gets access to your cloud prod environment and intentionally removes your database? That sounds more realistic. Well, we all know that, per Murphy’s law, the question is rather “When?” than “What if?”.

Regardless of the scenario, it is always good to have a plan B. So what can we do about it? Probably the most obvious option is to save the database snapshots in a different place than where your database lives. Yes, I think it is a good option. Nevertheless, it is always good to remember that every additional environment, framework, or technology requires additional engineering time to support.
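
For instance, here is a minimal sketch of such an off-site copy using the AWS SDK – copying a snapshot to another region so it survives the loss of the primary one (all identifiers below are illustrative):

import * as AWS from "aws-sdk";

// Destination region differs from the database's home region (eu-west-1)
const rds = new AWS.RDS({ region: "eu-central-1" });

rds.copyDBSnapshot({
	// ARN of the source snapshot in the primary region (illustrative)
	SourceDBSnapshotIdentifier: "arn:aws:rds:eu-west-1:123456789012:snapshot:rds:prod-db-2021-12-28",
	TargetDBSnapshotIdentifier: "prod-db-offsite-copy",
	SourceRegion: "eu-west-1" // lets the SDK pre-sign the cross-region request
}).promise()
	.then(data => console.info("copy started", data.DBSnapshot && data.DBSnapshot.Status))
	.catch(err => console.error("copy failed", err));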

Another way is to restrict everyone (regardless of permissions) from removing database snapshots. So even if someone intentionally wants to remove the database, there is always a possibility of a (relatively) quick restore. Fortunately, Amazon S3 provides the possibility to set an object lock on an S3 bucket. There are 2 retention modes for Object Lock in the AWS S3 service (AWS doc source):

  • Governance mode
  • Compliance mode

Governance mode allows deleting objects only if you have special permissions, while Compliance mode does not allow removing the object at all (even for the root user) for a particular period of time.
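
As a minimal sketch, this is how a default Compliance-mode retention could be set with the AWS SDK on a bucket that was created with Object Lock enabled (the bucket name is illustrative):

import * as AWS from "aws-sdk";

const s3 = new AWS.S3({ region: "eu-west-1" });

// Object Lock itself can only be enabled at bucket creation time;
// here we only set the default retention rule applied to new objects.
s3.putObjectLockConfiguration({
	Bucket: "my-database-snapshot-backups",
	ObjectLockConfiguration: {
		ObjectLockEnabled: "Enabled",
		Rule: {
			DefaultRetention: {
				Mode: "COMPLIANCE", // nobody, not even root, can delete until retention expires
				Days: 31
			}
		}
	}
}).promise()
	.then(() => console.info("object lock retention configured"))
	.catch(err => console.error("configuration failed", err));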

In addition to that, AWS recently introduced the possibility to export RDS database snapshots to an S3 bucket (AWS doc source). From the AWS Console we can easily do that by clicking the Export to Amazon S3 button:

AWS Console – Export to Amazon S3 option

So we can combine snapshot exporting with non-deletable files to protect database snapshots from deletion.

The last thing: even if this is pretty simple to do manually through the AWS Console, it is always better to have such an important process automated, so our database is safely stored even while we are chilling on the beach during our long-awaited summer holidays. To do that, we can subscribe to RDS snapshot creation events and, via a Lambda execution, initiate the export of the newly created snapshot to a non-deletable S3 bucket.

The architecture will look like this:

Below you can find the Serverless Framework file for creating that infrastructure:

service:
 name: rds-s3-exporter

plugins:
 - serverless-pseudo-parameters
 - serverless-plugin-lambda-dead-letter
 - serverless-prune-plugin

provider:
 name: aws
 runtime: nodejs14.x
 timeout: 30
 stage: dev${env:USER, env:USERNAME}
 region: eu-west-1
 deploymentBucket:
   name: ${opt:stage, self:custom.default-vars-stage}-deployments
 iamRoleStatements:
   # KMS permissions needed to encrypt the snapshot export
   - Effect: Allow
     Action:
       - "kms:GenerateDataKey*"
       - "kms:ReEncrypt*"
       - "kms:DescribeKey"
       - "kms:Encrypt"
       - "kms:CreateGrant"
       - "kms:ListGrants"
       - "kms:RevokeGrant"
     Resource: "*"
   # Allow passing the snapshot export role to the RDS export service
   - Effect: Allow
     Action:
       - "iam:PassRole"
       - "iam:GetRole"
     Resource:
       - { Fn::GetAtt: [ snapshotExportTaskRole, Arn ] }
   - Effect: Allow
     Action:
       - ssm:GetParameters
     Resource: "*"
   - Effect: Allow
     Action:
       - sqs:SendMessage
       - sqs:ReceiveMessage
       - sqs:DeleteMessage
       - sqs:GetQueueUrl
     Resource:
       - { Fn::GetAtt: [ rdsS3ExporterQueue, Arn ] }
       - { Fn::GetAtt: [ rdsS3ExporterFailedQ, Arn ] }
   - Effect: Allow
     Action:
       - lambda:InvokeFunction
     Resource:
       - "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:${opt:stage}-rds-s3-exporter"
   - Effect: Allow
     Action:
       - rds:DescribeDBClusterSnapshots
       - rds:DescribeDBClusters
       - rds:DescribeDBInstances
       - rds:DescribeDBSnapshots
       - rds:DescribeExportTasks
       - rds:StartExportTask
     Resource: "*"

 environment:
   CONFIG_STAGE: ${self:custom.vars.configStage}
   REDEPLOY: "true"
custom:
 stage: ${opt:stage, self:provider.stage}
 region: ${opt:region, self:provider.region}
 default-vars-stage: ppe
 vars: ${file(./vars.yml):${opt:stage, self:custom.default-vars-stage}}
 version: ${env:BUILD_VERSION, file(package.json):version}
 rdsS3ExporterQ: ${self:custom.stage}-rds-s3-exporter
 rdsS3ExporterFailedQ: ${self:custom.stage}-rds-s3-exporterFailedQ
 databaseSnapshotCreatedTopic: ${self:custom.stage}-database-snapshotCreated
 rdsS3ExporterBucket: "${self:custom.stage}-database-snapshot-backups"

functions:
 backup:
   handler: dist/functions/backup.main
   reservedConcurrency: ${self:custom.vars.lambdaReservedConcurrency.backup}
   timeout: 55
   events:
     - sqs:
         arn: "arn:aws:sqs:#{AWS::Region}:#{AWS::AccountId}:${self:custom.rdsS3ExporterQ}"
         batchSize: 1
   environment:
     CONFIG_STAGE: ${self:custom.vars.configStage}
     DATABASE_BACKUPS_BUCKET: ${self:custom.rdsS3ExporterBucket}
     IAM_ROLE: "arn:aws:iam::#{AWS::AccountId}:role/${opt:stage}-rds-s3-exporter-role"
     KMS_KEY_ID: alias/lambda
     REGION: "eu-west-1"

resources:
 Description: Lambda to handle upload database backups to S3 bucket
 Resources:
   rdsS3ExporterQueue:
     Type: AWS::SQS::Queue
     Properties:
       QueueName: "${self:custom.rdsS3ExporterQ}"
       MessageRetentionPeriod: 1209600 # 14 days
       RedrivePolicy:
         deadLetterTargetArn:
           Fn::GetAtt: [ rdsS3ExporterFailedQ, Arn ]
         maxReceiveCount: 5
       VisibilityTimeout: 60
   rdsS3ExporterFailedQ:
     Type: "AWS::SQS::Queue"
     Properties:
       QueueName: "${self:custom.rdsS3ExporterFailedQ}"
       MessageRetentionPeriod: 1209600 # 14 days
   databaseSnapshotCreatedTopic:
     Type: AWS::SNS::Topic
     Properties:
       TopicName: ${self:custom.databaseSnapshotCreatedTopic}
   snapshotCreatedTopicQueueSubscription:
     Type: "AWS::SNS::Subscription"
     Properties:
       TopicArn: arn:aws:sns:#{AWS::Region}:#{AWS::AccountId}:${self:custom.databaseSnapshotCreatedTopic}
       Endpoint:
         Fn::GetAtt: [ rdsS3ExporterQueue, Arn ]
       Protocol: sqs
       RawMessageDelivery: true
     DependsOn:
       - rdsS3ExporterQueue
       - databaseSnapshotCreatedTopic

   snapshotCreatedRdsTopicSubscription:
     Type: "AWS::RDS::EventSubscription"
     Properties:
       Enabled: true
        EventCategories: [ "creation" ]
        SnsTopicArn: arn:aws:sns:#{AWS::Region}:#{AWS::AccountId}:${self:custom.databaseSnapshotCreatedTopic}
        SourceType: "db-snapshot"
     DependsOn:
       - databaseSnapshotCreatedTopic

   rdsS3ExporterQueuePolicy:
     Type: AWS::SQS::QueuePolicy
     Properties:
       Queues:
         - Ref: rdsS3ExporterQueue
       PolicyDocument:
         Version: "2012-10-17"
         Statement:
           - Effect: Allow
             Principal: "*"
             Action: [ "sqs:SendMessage" ]
             Resource:
               Fn::GetAtt: [ rdsS3ExporterQueue, Arn ]
             Condition:
               ArnEquals:
                 aws:SourceArn: arn:aws:sns:#{AWS::Region}:#{AWS::AccountId}:${self:custom.databaseSnapshotCreatedTopic}
  
   rdsS3ExporterBucket:
     Type: AWS::S3::Bucket
     DeletionPolicy: Retain
     Properties:
       BucketName: ${self:custom.rdsS3ExporterBucket}
       AccessControl: Private
       VersioningConfiguration:
         Status: Enabled
       ObjectLockEnabled: true
       ObjectLockConfiguration:
         ObjectLockEnabled: Enabled
         Rule:
           DefaultRetention:
             Mode: COMPLIANCE
             Days: "${self:custom.vars.objectLockRetentionPeriod}"
       LifecycleConfiguration:
         Rules:
           - Id: DeleteObjectAfter31Days
             Status: Enabled
             ExpirationInDays: ${self:custom.vars.expireInDays}
       PublicAccessBlockConfiguration:
         BlockPublicAcls: true
         BlockPublicPolicy: true
         IgnorePublicAcls: true
         RestrictPublicBuckets: true
       BucketEncryption:
         ServerSideEncryptionConfiguration:
           - ServerSideEncryptionByDefault:
               SSEAlgorithm: AES256

   snapshotExportTaskRole:
     Type: AWS::IAM::Role
     Properties:
       RoleName: ${opt:stage}-rds-s3-exporter-role
       Path: /
       AssumeRolePolicyDocument:
         Version: '2012-10-17'
         Statement:
           - Effect: Allow
             Principal:
               Service:
                 - rds-export.aws.internal
                 - export.rds.amazonaws.com
             Action:
               - "sts:AssumeRole"
       Policies:
           - PolicyName: ${opt:stage}-rds-s3-exporter-policy
             PolicyDocument:
               Version: '2012-10-17'
               Statement:
                 - Effect: Allow
                   Action:
                     - "s3:PutObject*"
                     - "s3:ListBucket"
                     - "s3:GetObject*"
                     - "s3:DeleteObject*"
                     - "s3:GetBucketLocation"
                   Resource:
                     - "arn:aws:s3:::${self:custom.rdsS3ExporterBucket}"
                     - "arn:aws:s3:::${self:custom.rdsS3ExporterBucket}/*"

package:
 include:
   - dist/**
   - package.json
 exclude:
   - "*"
   - .?*/**
   - src/**
   - test/**
   - docs/**
   - infrastructure/**
   - postman/**
   - offline/**
   - node_modules/.bin/**

Now it’s time to add lambda handler code:

import { Handler } from "aws-lambda";
import { v4 as uuidv4 } from 'uuid';
import middy = require("middy");
import * as AWS from "aws-sdk";

export const processEvent: Handler<any, void> = async (request: any) => {
	console.info("rds-s3-exporter.started");

	// SQS batch size is 1, so there is always exactly one record
	const record = request.Records[0];
	// The SQS message body is the raw RDS event (RawMessageDelivery: true)
	const event = JSON.parse(record.body);

	const rds = new AWS.RDS({ region: process.env.REGION });
	// Start exporting the newly created snapshot to the locked S3 bucket
	await rds.startExportTask({
		ExportTaskIdentifier: `database-backup-${uuidv4()}`,
		SourceArn: event["Source ARN"],
		S3BucketName: process.env.DATABASE_BACKUPS_BUCKET || "",
		IamRoleArn: process.env.IAM_ROLE || "",
		KmsKeyId: process.env.KMS_KEY_ID || ""
	}).promise().then(data => {
		console.info("rds-s3-exporter.status", data);
		console.info("rds-s3-exporter.success");
	});
};

export const main = middy(processEvent)
	.use({
		// middy error middleware: log the error and propagate it so the
		// invocation fails and the message is retried (and eventually DLQ'd)
		onError: (handler, callback) => {
			console.error("rds-s3-exporter.error", handler.error);
			callback(handler.error);
		}
	});

As there is no built-in way to flexibly filter which databases we want to export, it is always possible to add custom filtering in the Lambda execution itself. You can find an example of such logic, as well as the whole codebase, in the GitHub repository; a minimal sketch of such a filter is shown below.
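
This sketch assumes an ALLOWED_INSTANCES environment variable (an illustrative name, not necessarily what the repository uses) and relies on the “Source ID” field of the RDS event, which for automated snapshots looks like “rds:<instance>-2021-12-28-04-05”:

// Returns true when the snapshot belongs to one of the allowlisted instances.
// With an empty allowlist, every snapshot is exported.
const shouldExport = (event: { [key: string]: string }): boolean => {
	const allowed = (process.env.ALLOWED_INSTANCES || "")
		.split(",")
		.filter(name => name.length > 0);
	const sourceId = event["Source ID"] || "";
	// Strip the "rds:" prefix and the trailing timestamp to get the instance name
	const instanceName = sourceId.replace(/^rds:/, "").replace(/-\d{4}-\d{2}-\d{2}.*$/, "");
	return allowed.length === 0 || allowed.indexOf(instanceName) !== -1;
};

Calling shouldExport(event) at the top of processEvent and returning early keeps unwanted snapshots out of the bucket.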

Right before publishing this post, I found that AWS actually implemented AWS Backup Vault Lock quite recently (Oct 2021), which does the same thing out of the box. You can read more about it on the AWS docs website. Nevertheless, at the time of publishing this post, AWS Backup Vault Lock had not been certified by third-party assessors against SEC 17a-4(f) and CFTC regulations.