Bare Metal Cloud Deployment: Easy and Fast

Anchit Arnav
4 min read · Sep 24, 2021

When to use this

You have developed a basic application that runs on your local machine, and the next step you want to take is to deploy it on some cloud so you can actually use it. There are many ways to do this, but you especially want to ensure:

  • No Kubernetes or serverless knowledge is required.
  • No PaaS knowledge is required (e.g. Google App Engine or AWS Elastic Beanstalk).
  • Fast to implement and just gets the work done, without a lot of hassle.

Then, bare metal deployment is for you.

Many people discourage this classic method of deployment, and there are good reasons for that, but we cannot deny that in a lot of use cases you simply want the job done, without the hassles that come with other approaches and delay it forever.

If, for some reason, this is not the case, then I strongly suggest you have a look at other solutions like Kubernetes, serverless, PaaS, etc.

Rough Architecture

Design diagram: bare metal deployment of an application behind a load balancer or application gateway

Overall Steps:

  1. Create a deployable: Package your application as a deployable artefact. How you do this depends on the programming language: in Java you can build a JAR file and store it on an artefact server, in Python you can build a pip-installable package, in C++ you can build an executable, and so on (see the first sketch after this list).
  2. Create an installation script: Write a script (preferably in shell) which, when run on a fresh VM instance, configures your application and makes it ready to go. For example, the script can install the required software and dependencies, pull your code or package from version control or the artefact server, and then start your application (see the second sketch after this list).
  3. Set up an LB or Application Gateway: Next, create a central point of access for your application. The application gateway or load balancer should be the only way to reach your application; traffic should not hit your VMs directly. Instead, your VMs should be the backend targets of your load balancer or application gateway (see the third sketch after this list).
  4. (Optional) Get a DNS name: While not necessary for internal or POC projects, it is always preferable to have a DNS name that resolves to the address of your application gateway or load balancer.
  5. (Optional) Set up TLS/SSL: Another optional but preferred step is to get a CA-signed certificate for your service using the address of the LB. This increases security many fold. We will discuss this below.
  6. (Optional) Set up a firewall for your LB/App Gateway: This step is pointless if you are building a truly public application, but if it is only for people in your organisation, it is best to keep a firewall that allows only traffic originating from your organisation’s public IPs. Remember that this rule must be applied on the LB or App Gateway, not on the VMs.
  7. (Optional) Set up authentication on the LB or App Gateway: The final step is to set up authentication on your LB. Cloud providers generally offer this out of the box: on AWS you have Amazon Cognito or IAM-based auth, and on GCP you have IAP (Identity-Aware Proxy).
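
As a rough illustration of step 1, here is what packaging could look like for a Python project; the project name myapp and the artifact repository URL are placeholders, and for Java the equivalent would be building a JAR and publishing it to your artefact server:

    # Install the packaging tools, build the package, and push it to an
    # internal artifact server (URL is a placeholder).
    python -m pip install --upgrade build twine
    python -m build                      # produces a wheel and sdist under dist/
    python -m twine upload --repository-url https://artifacts.example.com/ dist/*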
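
For step 2, a minimal sketch of an installation script for a pip-installable Python application on a Debian/Ubuntu VM could look like this; the package name, artifact URL, and port are again placeholders:

    #!/usr/bin/env bash
    # Configure a fresh VM: install dependencies, fetch the package, start the app.
    set -euo pipefail

    sudo apt-get update -y
    sudo apt-get install -y python3 python3-venv          # runtime dependencies

    sudo mkdir -p /opt/myapp && sudo chown "$USER" /opt/myapp
    python3 -m venv /opt/myapp/venv                       # isolated environment
    /opt/myapp/venv/bin/pip install \
        --index-url https://artifacts.example.com/simple myapp

    # Start the application in the background and capture its logs.
    nohup /opt/myapp/venv/bin/myapp --port 8080 \
        > /opt/myapp/myapp.log 2>&1 &

You can run this script by hand on each new VM, or wire it into the VM’s startup (user data) so that new instances configure themselves.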
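
For step 3, on AWS the setup could look roughly like the following with the AWS CLI; all names, IDs, and the /health path are placeholders, and GCP and Azure have analogous load balancer and backend constructs:

    # Create the load balancer, a target group pointing at the app port,
    # register the VMs as targets, and add a listener in front.
    aws elbv2 create-load-balancer --name myapp-alb \
        --subnets subnet-aaaa subnet-bbbb --security-groups sg-lb1234
    aws elbv2 create-target-group --name myapp-tg \
        --protocol HTTP --port 8080 --vpc-id vpc-1234 --health-check-path /health
    aws elbv2 register-targets --target-group-arn <target-group-arn> \
        --targets Id=i-instance1 Id=i-instance2
    aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> \
        --protocol HTTP --port 80 \
        --default-actions Type=forward,TargetGroupArn=<target-group-arn>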

Good things about this approach

The best thing about this approach is that it is very easy to implement. It does not require any prior Kubernetes or serverless knowledge, and it is easy to debug as well. If you have a quick POC project for your organisation, or just want to deploy some simple tool that you created and share it with others, this is the way to go.

Security Considerations

While fast and easy is the basic mantra here, we should take care of a few things to ensure that we keep security in mind and are not caught off guard just because we were in a hurry.

  1. Block direct access to VMs: Put a firewall on your VM or set of VMs that allows traffic only from your load balancer, and only on the port your application runs on (see the first sketch after this list).
  2. Prevent direct SSH: Especially ensure that your firewalls prevent direct SSH to your VMs.
  3. Prefer an Application Gateway if your application supports it, and use an L3 load balancer only if required. This restricts a lot of things and limits the attack surface to HTTP(S) vulnerabilities only.
  4. TLS/SSL: If you plan to transfer any sensitive information over the connection, either in requests from your users or in replies from your application, do consider setting up TLS/SSL and using HTTPS instead of HTTP. Even a self-signed certificate is better than none at all if your application is internal (see the second sketch after this list).
  5. Prefer a cloud-based auth service instead of writing custom auth. Going for something like Amazon Cognito, GCP Firebase Auth, or IAP is always preferred over writing your own authentication logic. This removes all the hassle of storing secret user information, and it also shields your application from floods of random auth requests, since your provider handles them instead of your application.
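
As a sketch of points 1 and 2 on AWS, the VMs’ security group can be limited so that only the load balancer’s security group can reach the application port, and SSH (if needed at all) is restricted to a known range; the group IDs, port, and CIDR are placeholders:

    # Allow application traffic only from the LB's security group.
    aws ec2 authorize-security-group-ingress \
        --group-id sg-vm1234 \
        --protocol tcp --port 8080 \
        --source-group sg-lb1234

    # If SSH is needed, restrict it to a bastion or office range, never 0.0.0.0/0.
    aws ec2 authorize-security-group-ingress \
        --group-id sg-vm1234 \
        --protocol tcp --port 22 \
        --cidr 203.0.113.0/24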
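
For point 4, a self-signed certificate for an internal service can be generated with a single openssl command (the hostname is a placeholder); most load balancers and application gateways let you upload the resulting key and certificate:

    # Generate a private key and a self-signed certificate valid for one year.
    openssl req -x509 -newkey rsa:2048 -nodes \
        -keyout myapp.key -out myapp.crt -days 365 \
        -subj "/CN=myapp.internal.example.com"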

Summary

A bare metal deployment is for quick implementation in situations where you do not want much hassle and doing things fast is the key; it may not be the best way, but it gets the job done.
