VPC Security! It's a must!

In this blog post we are going to take a look at Security Groups and Network Access Control Lists (NACLs) in AWS, understand the difference between them, and see how we can use them to increase our security in the cloud.

Security Groups

Before we talk about security groups, it's important to know what they look like in AWS.

AWS Security groups

So what are security groups?!

  1. Control how traffic is allowed into or out of your EC2 instances.
  2. Security groups are stateful (return traffic is automatically allowed).
    • If you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules.
  3. Can be attached to multiple instances.
  4. Locked down to a Region/VPC combination.
  5. All inbound traffic is blocked by default.
  6. All outbound traffic is authorized by default.
  7. You can specify allow rules, but not deny rules.

You can also reference another security group instead of an IP range.

Let’s take an example of that

In this example we can see that EC2-1 and EC2-2 are allowed to send traffic to EC2-3, because EC2-3 has a security group (named SG-200) with an inbound rule that allows access from any machine that has the security group named SG-100 assigned to it.

Inbound rules of EC2-3

Source | Protocol | Port range | Description
sg-100 | All | All | Allow inbound traffic from network interfaces (and their associated instances) that are assigned to security group sg-100

Security group example
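For reference, a rule like the one in the table is expressed in the EC2 API as an IpPermissions entry that names the source security group instead of a CIDR block. This is only a sketch – sg-100 stands in for your real group ID, and an IpProtocol of "-1" means all protocols and ports:

```json
[
  {
    "IpProtocol": "-1",
    "UserIdGroupPairs": [
      {
        "GroupId": "sg-100",
        "Description": "Allow inbound traffic from instances with SG-100 attached"
      }
    ]
  }
]
```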

Now let's talk about Network Access Control Lists (NACLs).


Key Notes about NACL

  • Control traffic between different subnets in the same VPC
  • Stateless – we need to explicitly open outbound (return) traffic
  • Works at the subnet level – automatically applied to all instances in the subnet
  • Contains both Allow and Deny rules
  • Rules are evaluated in order of rule number; the first match wins
  • The default NACL allows all inbound and outbound traffic
  • NACLs are a great way of blocking a specific IP at the subnet level

Inbound rules:

Rule # | Type | Protocol | Port range | Source | Allow/Deny
100 | All IPv4 traffic | All | All | 0.0.0.0/0 | ALLOW
* | All IPv4 traffic | All | All | 0.0.0.0/0 | DENY

Outbound rules:

Rule # | Type | Protocol | Port range | Destination | Allow/Deny
100 | All IPv4 traffic | All | All | 0.0.0.0/0 | ALLOW
* | All IPv4 traffic | All | All | 0.0.0.0/0 | DENY

NACL Example
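The evaluation order is worth internalizing: rules are checked in ascending rule number, the first match decides, and the `*` rule denies whatever nothing else matched. Here is a toy Python model of that behavior (just an illustration, not anything AWS actually runs):

```python
# A toy model of NACL evaluation: the first matching rule (by ascending
# rule number) decides; the catch-all "*" rule denies everything else.
from ipaddress import ip_address, ip_network

def evaluate_nacl(rules, src_ip, port):
    """rules: list of (rule_number, cidr, (low_port, high_port), action)."""
    for number, cidr, ports, action in sorted(rules, key=lambda r: r[0]):
        if ip_address(src_ip) in ip_network(cidr) and ports[0] <= port <= ports[1]:
            return action
    return "DENY"  # the implicit "*" rule

rules = [
    (100, "0.0.0.0/0", (80, 80), "ALLOW"),       # allow HTTP from anywhere
    (90, "203.0.113.0/24", (0, 65535), "DENY"),  # block one network first
]

print(evaluate_nacl(rules, "198.51.100.7", 80))   # ALLOW (rule 100 matches)
print(evaluate_nacl(rules, "203.0.113.9", 80))    # DENY  (rule 90 wins, lower number)
print(evaluate_nacl(rules, "198.51.100.7", 443))  # DENY  (no rule matches, "*" applies)
```

This is why placing a Deny rule with a lower number than a broad Allow rule is how you block a specific IP at the subnet level.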

Most important – by default, subnets in the same VPC can communicate without any restrictions. That is because the default NACL permits all traffic inside the VPC.

It's recommended to use NACLs to limit access between subnets.

Compare security groups and network ACLs

The following table summarizes the basic differences between security groups and network ACLs.

Security group | Network ACL
Operates at the instance level | Operates at the subnet level
Supports allow rules only | Supports allow rules and deny rules
Is stateful: return traffic is automatically allowed, regardless of any rules | Is stateless: return traffic must be explicitly allowed by rules
All rules are evaluated before deciding whether to allow traffic | Rules are processed in order, starting with the lowest numbered rule, when deciding whether to allow traffic
Applies to an instance only if the security group is specified when launching the instance, or associated with the instance later on | Automatically applies to all instances in the subnets it's associated with (therefore, it provides an additional layer of defense if the security group rules are too permissive)
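The stateful/stateless distinction in the table can be sketched in a few lines of Python. This is again only a toy model for illustration: a security group remembers outbound connections and lets the replies back in automatically, while a NACL judges every packet on its rules alone:

```python
# Toy model of stateful vs stateless filtering.
class StatefulFirewall:          # behaves like a security group
    def __init__(self):
        self.connections = set()
    def outbound(self, dst, port):
        self.connections.add((dst, port))    # track the outgoing flow
        return True                          # all outbound allowed by default
    def inbound(self, src, port, inbound_rules):
        if (src, port) in self.connections:  # reply to a tracked flow
            return True                      # allowed regardless of rules
        return (src, port) in inbound_rules

class StatelessFilter:           # behaves like a NACL
    def inbound(self, src, port, inbound_rules):
        return (src, port) in inbound_rules  # no memory of past traffic

sg = StatefulFirewall()
sg.outbound("10.0.2.15", 443)
print(sg.inbound("10.0.2.15", 443, inbound_rules=set()))    # True: return traffic

nacl = StatelessFilter()
print(nacl.inbound("10.0.2.15", 443, inbound_rules=set()))  # False: needs explicit rule
```

(A real NACL matches on CIDR blocks and ephemeral port ranges rather than exact pairs, but the behavioral difference is the same.)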

SSH to your Linux Server using the Google Authenticator app

In this section, we will learn how to secure your SSH connection with MFA, using the Google Authenticator app.

Before we start, go ahead and download the Google Authenticator app to your mobile device.

After you have successfully installed the app on your mobile device, go to your Linux server and install the google-authenticator PAM module by typing this command:

swarm@swarm3:~$ sudo apt install libpam-google-authenticator
Reading package lists... Done
Building dependency tree       
Reading state information... Done
libpam-google-authenticator is already the newest version (20191231-2).
0 upgraded, 0 newly installed, 0 to remove and 75 not upgraded

After the installation completes, run the following command:

swarm@swarm3:~$ google-authenticator

  • Follow the instructions and scan the QR code with the Google Authenticator app on your phone
  • You can answer yes to every question you encounter during the setup process
  • After completion, save the emergency scratch codes in a secure location; you will need them if you lose your phone
  • You can repeat this process for every user on your Linux server

Now we need to enable “ChallengeResponseAuthentication” in the SSH daemon config file.

swarm@swarm3:~$ sudo vi /etc/ssh/sshd_config

# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication yes

Don’t forget to save the file and exit by typing :wq

Restart the SSH service:

swarm@swarm3:~$ sudo systemctl restart ssh

The final step is to add the google authentication module to the PAM ssh config file:

swarm@swarm3:~$ sudo vi /etc/pam.d/sshd

Add this line to the end of the config file and save the file:

auth required pam_google_authenticator.so

That’s it – now you can SSH to your server using Google authentication.

Verification code: enter the code presented in the Google Authenticator app on your mobile device.
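Under the hood, that verification code is a standard TOTP value (RFC 6238): an HMAC-SHA1 over the current 30-second time step, truncated to 6 digits. Here is a minimal Python sketch of the algorithm that Google Authenticator and the PAM module both implement. The secret below is the RFC test value, not a real one – real Google Authenticator secrets are stored base32-encoded, so decode them with base64.b32decode first:

```python
# Minimal TOTP (RFC 6238) built on HOTP (RFC 4226), using only the stdlib.
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at_time: float, step: int = 30) -> str:
    """Time-based one-time password: HOTP over the 30-second time step."""
    return hotp(secret, int(at_time // step))

secret = b"12345678901234567890"          # RFC 4226/6238 test secret
print(totp(secret, at_time=59))           # prints 287082 (RFC 6238 test vector)
print(totp(secret, at_time=time.time()))  # the code your app would show right now
```

Because both sides derive the code from the shared secret and the current time, the server never needs to contact Google – which is why this works on an air-gapped box too.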

SSH Key-Pairs.

Remotely Connect to Linux Servers with SSH key-pairs

SSH: Authentication with Key-pairs

On your client machine:

  • Create an SSH key pair by using the command ssh-keygen
    • It will create 2 files (a private key and a public key) in the .ssh folder.
[menit@fedora .ssh]$ ls 
id_rsa id_rsa.pub
  • It is recommended to use a passphrase to encrypt your private key
[menit@fedora .ssh]$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/menit/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/menit/.ssh/id_rsa
Your public key has been saved in /home/menit/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:E+n8J9Sjbdbi5A7uyu7LVAm2Y8fNBtSawvCyoR7l3Y4 menit@fedora
The key's randomart image is:
+---[RSA 3072]----+
|          ..     |
|      .  o  .    |
|       += .o     |
|      +++=o*     |
|     + =So* *    |
|    o o..B.+ o   |
|   . .  .o= B .  |
|    .  +E..X .   |
|       oB+o.+    |
+----[SHA256]-----+

How to deploy your public key to your managed servers

  • To connect to your Linux servers using SSH keys, you will need to transfer the public key to your remote servers

There are 2 methods to transfer the public key to your server.

The first method is to install the public key from your own host onto your remote server using this command:

  • This command will create a .ssh folder on the remote host with a file named authorized_keys, and it will append the public key to this file.
ssh-copy-id -i /home/menit/.ssh/id_rsa.pub username@

  • The second method is to copy your public key and paste it on your remote server into a file named authorized_keys under the .ssh folder (if you can’t find such a file, just create it).

Now you can connect to your machine using this command

[menit@fedora .ssh]$ ssh swarm@

Connect to your remote server without the passphrase

To avoid the need to enter a passphrase every time you SSH to a remote host, you can use ssh-agent to cache your authentication credentials in the host's memory.

[menit@fedora .ssh]$ ssh-agent bash
[menit@fedora .ssh]$ ssh-add id_rsa
Enter passphrase for id_rsa: ***********
Identity added: id_rsa (menit@fedora)

How to SSH to a remote host using the root user account

  1. On the remote host, you will first need to enable the login-as-root option: to enable it, remove the # from the line “PermitRootLogin prohibit-password”
swarm@swarm3:/etc/ssh$ vim /etc/ssh/sshd_config

#LoginGraceTime 2m
PermitRootLogin prohibit-password
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10

Exit and Save the file by pressing :wq

  2. Switch to your root account on the remote server and add the public SSH key to the authorized_keys file under the .ssh folder.
root@swarm3:~/.ssh# ls

How to grant a user sudo permissions

To grant your user sudo permissions you will need to edit the sudoers config file with visudo:

[menit@fedora .ssh]$ sudo visudo

Under “Allows people in group wheel to run all commands”, add an entry for your user (the line below uses the example user menit):

#Allow users to run all commands
menit  ALL=(ALL)       ALL

This is how it should look in the config file:

## Allows people in group wheel to run all commands
%wheel  ALL=(ALL)       ALL

#Allow users to run all commands
menit  ALL=(ALL)       ALL

It’s important to enter your new entry at the bottom of the config file, because the sudoers file is processed from top to bottom and a later entry overrides an earlier one.

Creating and Using Docker Containers

In this post I will give you the basic commands to run and troubleshoot basic Docker containers.

Install Docker First

This is a simple command to run an Nginx Docker container:

docker container run --publish 844:80 --detach --name Mynginx nginx

  • --publish 844:80 = expose port 844 on the host and forward it to port 80 in the container
  • --detach = run the container in the background
  • --name = name of the container
  • nginx = image to run

What happens in ‘docker container run’?

  • Docker looks for the image locally in the image cache; if it does not exist there, it fetches the image from the Docker Hub registry
  • Creates a new container based on that image (nginx:latest by default)
  • Gives it a virtual IP on a private network
  • Opens up the published port on the host (844 in our example) and forwards it to port 80 in the container
  • Starts the container by using the CMD in the image's Dockerfile
docker container ls
docker container ls -a
  • ls = show running containers
  • ls -a = show all containers, in any status
docker container logs Mynginx
  • logs = show logs for the specific container (Mynginx)
[root@fedora ~]# docker container ls -a
CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS          PORTS                                 NAMES
b1311034dc4c   nginx     "/docker-entrypoint.…"   4 minutes ago    Created                                               Webhost2
6a936dd1a1ed   nginx     "/docker-entrypoint.…"   10 minutes ago   Up 10 minutes   0.0.0.0:844->80/tcp, :::844->80/tcp   Webhost
[root@fedora ~]# docker container rm -f b13 6a9
  • rm = delete a container
  • -f = force deletion of a running container
  • b13, 6a9 = container IDs (a unique prefix of the ID is enough)

What's going on inside a container?

  • docker container top – show the process list of one container
[root@fedora ~]# docker container top mysql
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
systemd+            56977               56956               0                   10:29               ?                   00:00:01            mysqld
  • docker container inspect – show details of one container configuration (Networking, mounts and more)
[root@fedora ~]# docker container inspect mysql 
        "Id": "5a1896ceb2cf076c64066183125e6b3814fb5e7109e392fde842230802837e31",
        "Created": "2021-08-18T07:29:01.167554924Z",
        "Path": "docker-entrypoint.sh",
        "Args": [

  • docker container stats = show live performance stats for all running containers
CONTAINER ID   NAME      CPU %     MEM USAGE / LIMIT     MEM %     NET I/O       BLOCK I/O         PIDS
f793d1abdadc   nginx     0.00%     9.875MiB / 31.03GiB   0.03%     16.9kB / 0B   7.04MB / 0B       9
5a1896ceb2cf   mysql     0.18%     443.1MiB / 31.03GiB   1.39%     27.9kB / 0B   38.5MB / 2.03GB   3

Getting a shell inside containers

  • docker container run -it = start a new container interactively (if you exit the shell, the container stops)
[root@fedora ~]# docker container run --name meninginx -it nginx bash
root@64aa3ad9a53c:/# ls
bin		      etc    mnt   sbin  var
boot		      home   opt   srv
dev		      lib    proc  sys
docker-entrypoint.d   lib64  root  tmp
docker-entrypoint.sh  media  run   usr
root@64aa3ad9a53c:/# hostname

To re-run a stopped container and enter its shell:

[root@fedora ~]# docker container ls -a
CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS                       PORTS                                                  NAMES
64aa3ad9a53c   nginx     "/docker-entrypoint.…"   4 minutes ago    Exited (130) 7 seconds ago                                                          meninginx
f793d1abdadc   nginx     "/docker-entrypoint.…"   40 minutes ago   Up 40 minutes                80/tcp                                                 nginx
5a1896ceb2cf   mysql     "docker-entrypoint.s…"   44 minutes ago   Up 44 minutes      0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp   mysql
[root@fedora ~]# docker container start -ai meninginx 
root@64aa3ad9a53c:/# ls
bin  boot  dev	docker-entrypoint.d  docker-entrypoint.sh  etc	home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@64aa3ad9a53c:/# ^C
  • docker container exec -it = open an interactive shell inside a running container
[root@fedora ~]# docker container exec -it mysql bash

Container Resources

Check how many resources a container is using:

root@master:~# docker stats nginx

To Limit container Memory

root@master:~# docker run -d --name nginx1 --memory "200mb"  nginx:alpine

root@master:~# docker stats nginx nginx1

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
8271c48d56c7        nginx               0.00%               4.055MiB / 3.817GiB   0.10%               1.01kB / 0B         12.2MB / 16.4kB     3
08fa208e8474        nginx1              0.00%               3.77MiB / 200MiB      1.88%               726B / 0B           0B / 16.4kB         3

To limit container CPU

  • --cpuset-cpus 0,1 = assign CPU 0 and CPU 1 (2 CPUs total)
  • --cpuset-cpus 0-2 = assign CPUs 0 through 2 (3 CPUs total)
root@master:~# grep "model name" /proc/cpuinfo
model name      : Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz
model name      : Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz
root@master:~# grep "model name" /proc/cpuinfo | wc -l

root@master:~# docker run -d --name nginx2 --memory "300mb" --cpuset-cpus 0,1 nginx:alpine

Copy files to and from your container – docker cp

Copy Files from docker host to container

root@master:~# docker cp index.html nginx1:/usr/share/nginx/html/index.html

Copy Files from container to docker host

root@master:~# docker cp nginx1:/opt/test.txt .

Install Zabbix Server 5.0 LTS on CentOS 8.

Zabbix Server depends on the following software:

  • MySQL database server
  • Apache web server
  • PHP with the required extensions

For this installation I used the CentOS-8.2.2004-x86_64-minimal image.

If you’re not a fan of SELinux, I recommend setting it to Permissive mode:

setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config

Install and configure Zabbix server for your platform

a. Install Zabbix repository

# rpm -Uvh https://repo.zabbix.com/zabbix/5.0/rhel/8/x86_64/zabbix-release-5.0-1.el8.noarch.rpm
# dnf clean all

b. Install Zabbix server, frontend, agent

# dnf install zabbix-server-mysql zabbix-web-mysql zabbix-apache-conf zabbix-agent

Install MySQL Server on CentOS 8

Install MySQL Database Server

sudo dnf install mysql-server

Activate the MySQL service using the command below:

sudo systemctl start mysqld.service
sudo systemctl enable mysqld

Secure MySQL by running mysql_secure_installation to change the default password for the MySQL root user:

sudo mysql_secure_installation

Enter current password for root (enter for none): press Enter
Set root password? [Y/n]: Y
New password: <Enter root DB password>
Re-enter new password: <Repeat root DB password>
Remove anonymous users? [Y/n]: Y
Disallow root login remotely? [Y/n]: Y
Remove test database and access to it? [Y/n]: Y
Reload privilege tables now? [Y/n]: Y

Once the database server is installed, you need to create a database and a user for Zabbix:

c. Create initial database

Run the following on your database host.

Don’t forget to change the password before you copy this code.

mysql -uroot -p
mysql> create database zabbix character set utf8 collate utf8_bin;
mysql> create user zabbix@localhost identified by 'password';
mysql> grant all privileges on zabbix.* to zabbix@localhost;
mysql> quit;

Import Zabbix Server database schema

zcat /usr/share/doc/zabbix-server-mysql*/create.sql.gz | mysql -uzabbix -p zabbix

d. Configure the database for Zabbix server

Edit the file /etc/zabbix/zabbix_server.conf and set the database password you chose earlier:

DBPassword=password

e. Configure PHP for Zabbix frontend

Edit the file /etc/php-fpm.d/zabbix.conf, uncomment the date.timezone line and set the right timezone for you:

; php_value[date.timezone] = Asia/Jerusalem

File Example:

php_value[max_execution_time] = 300
php_value[memory_limit] = 128M
php_value[post_max_size] = 16M
php_value[upload_max_filesize] = 2M
php_value[max_input_time] = 300
php_value[max_input_vars] = 10000
php_value[date.timezone] = Asia/Jerusalem

Configure firewall

firewall-cmd --add-service={http,https} --permanent
firewall-cmd --add-port={10051/tcp,10050/tcp} --permanent
firewall-cmd --reload

f. Start Zabbix server and agent processes

Start the Zabbix server and agent processes and make them start at system boot:

systemctl restart zabbix-server zabbix-agent httpd php-fpm
systemctl enable zabbix-server zabbix-agent httpd php-fpm

Open Zabbix URL: http://<server_ip_or_name>/zabbix in your browser.


Confirm that all pre-requisites are satisfied.

Configure DB settings

Finish installation

Configure email notifications

AWS S3 Bucket – Secure File Sharing

In this blog post, we will create an S3 bucket with a policy that only allows us to connect to a specific folder in the bucket, and only from a specific IP.

The Main Advantages of this service:

  • Unlimited storage
  • Low Cost
  • Ability to transfer data to Cold/Archive Storage
  • Limit Access by IP and Folder
  • Have backup/redundancy
  • Can be created in any region.


The main disadvantages:

  • Hard to manage users
  • Requires basic knowledge of JSON and AWS
  • Limited to specific clients that support S3 buckets (such as WinSCP)

Let's start – first, let's create an S3 bucket.

To create a bucket

  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. Choose Create bucket.
  3. In Bucket name, enter a DNS-compliant name for your bucket. The bucket name must:
    • Be unique across all of Amazon S3.
    • Be between 3 and 63 characters long.
    • Not contain uppercase characters.
    • Start with a lowercase letter or number.
  After you create the bucket, you can’t change its name. For information about naming buckets, see Rules for bucket naming in the Amazon Simple Storage Service Developer Guide. Important: avoid including sensitive information, such as account numbers, in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket.
  4. In Region, choose the AWS Region where you want the bucket to reside. Choose a Region close to you to minimize latency and costs and address regulatory requirements. Objects stored in a Region never leave that Region unless you explicitly transfer them to another Region. For a list of Amazon S3 AWS Regions, see AWS service endpoints in the Amazon Web Services General Reference.
  5. In Bucket settings for Block Public Access, choose the Block Public Access settings that you want to apply to the bucket. (Please leave all settings enabled )
  6. After you have successfully created the bucket, enter it and create a Home folder; inside the Home folder we will create 2 more folders, one named Devops and the second named IT.

Now let's create the IAM policy.

To create your own IAM policy

  1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
  2. Choose Policies, and then choose Create Policy. If a Get Started button appears, choose it, and then choose Create Policy.
  3. In the create-policy screen, select the JSON tab and paste this code. (Don’t forget to change <Bucketname> and <YourpublicIP> in the JSON to your actual bucket name and the public IP you are coming from.)
  4. Click on Review Policy, give the policy a name, and click on Create Policy.
This example policy grants access to the Home/Devops folder only, and denies everything when the request does not come from your IP; adjust the actions, resources and prefix to match your own folders:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUsersToAccessFolder2Only",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<Bucketname>/Home/Devops/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<Bucketname>"
            ],
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "Home/Devops/*"
                    ]
                }
            }
        },
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": [
                        "<YourpublicIP>/32"
                    ]
                },
                "Bool": {
                    "aws:ViaAWSService": "false"
                }
            }
        }
    ]
}
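Before pasting the policy into the console, it's worth sanity-checking that your edited JSON is still valid. A small Python helper for that (the sample string below is a hypothetical two-statement policy, just to show the usage):

```python
# Sanity-check an IAM policy document before pasting it into the console.
import json

def policy_effects(text: str) -> list:
    """Parse an IAM policy and return the Effect of each statement.
    Raises ValueError if the JSON is malformed or the Version is wrong."""
    policy = json.loads(text)  # json.JSONDecodeError is a ValueError
    if policy.get("Version") != "2012-10-17":
        raise ValueError("unexpected policy Version")
    return [stmt["Effect"] for stmt in policy["Statement"]]

sample = """{
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:ListBucket", "Resource": "*"},
        {"Effect": "Deny", "Action": "*", "Resource": "*"}
    ]
}"""
print(policy_effects(sample))   # ['Allow', 'Deny']
```

A missing comma or bracket fails loudly here instead of producing a confusing error in the AWS console.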

After we created the policy, let's create an IAM user and attach the new policy to it.

Creating IAM users (console)

You can use the AWS Management Console to create IAM users.

To create one or more IAM users (console)

  1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
  2. In the navigation pane, choose Users and then choose Add user.
  3. Type the user name for the new user.
  4. Select the type of access this set of users will have. We will select Programmatic access.

Type the name of the policy that you previously created.

Click Next and create the user.

Save the access key ID and secret access key in a secure location; we will use them to connect to our bucket.

That's it! Now let's connect to our S3 bucket:

  1. Download WinSCP
  2. File protocol – Amazon S3
  3. Click on Advanced and set the remote directory
  4. Enter the access key ID and secret access key and click Login