Monday, 18 December 2017

How to Configure Chef on CentOS/Red Hat


We shall see how to deploy and configure Chef on local machines.


Chef Components


Chef consists of a Chef server, one or more workstations, and nodes where the chef-client is installed.


Chef Server: This is the central hub that stores the cookbooks and recipes uploaded from workstations; chef-client nodes then pull their configurations from it for deployment.

​​
Chef Workstations: This is where recipes, cookbooks, and other Chef configuration details are created or edited. All of these are then pushed to the Chef server from the workstation, where they become available to deploy to chef-client nodes.

Chef Client: This is the target node, with chef-client installed, where the configurations are deployed. A node can be any machine (physical, virtual, cloud, network device, etc.).

Below are the prerequisites, which I will leave to the reader to pre-configure:


DNS resolution should work between the Chef server, workstation, and client; otherwise add entries in /etc/hosts. I am using CentOS 7 for this setup.


Hostnames (Roles)


cen01.localhost.com (ChefServer)

cen02.localhost.com (ChefClient)
fedora.localhost.com (ChefWorkstation)
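
If DNS is not available, entries along these lines in /etc/hosts on all three machines will do (the IP addresses here are placeholders; substitute your own):

```
192.168.122.10   cen01.localhost.com   cen01
192.168.122.11   cen02.localhost.com   cen02
192.168.122.12   fedora.localhost.com  fedora
```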

Chef Server:


Download the latest version of the Chef server core package.

# wget https://packages.chef.io/files/stable/chef-server/12.17.5/el/7/chef-server-core-12.17.5-1.el7.x86_64.rpm
# rpm -ivh chef-server-core-12.17.5-1.el7.x86_64.rpm

Once the installation is complete, reconfigure the Chef server so that all of its components are set up and start working together.

# chef-server-ctl reconfigure
#

Check the status of the server components using the following command:


# chef-server-ctl status

run: bookshelf: (pid 1167) 64163s; run: log: (pid 1163) 64163s
run: nginx: (pid 10259) 63407s; run: log: (pid 1144) 64163s
run: oc_bifrost: (pid 1170) 64163s; run: log: (pid 1166) 64163s
run: oc_id: (pid 1168) 64163s; run: log: (pid 10087) 63510s
run: opscode-erchef: (pid 6741) 63791s; run: log: (pid 1192) 64163s
run: opscode-expander: (pid 1162) 64163s; run: log: (pid 1161) 64163s
run: opscode-solr4: (pid 1169) 64163s; run: log: (pid 1165) 64163s
run: postgresql: (pid 1160) 64163s; run: log: (pid 1159) 64163s
run: rabbitmq: (pid 1146) 64163s; run: log: (pid 1145) 64163s
run: redis_lb: (pid 7752) 63535s; run: log: (pid 1152) 64163s
#

We need to create an admin user. This user will have access to make changes to the infrastructure components in the organization we will be creating. The command below generates the user's RSA private key automatically; save it to a known location.

# chef-server-ctl user-create admin admin admin admin@localhost.com password -f /etc/chef/admin.pem
ERROR: Error connecting to https://127.0.0.1/users/, retry 1/5
ERROR: Error connecting to https://127.0.0.1/users/, retry 2/5
ERROR: Error connecting to https://127.0.0.1/users/, retry 3/5

If you receive an error such as "ERROR: Error connecting to https://127.0.0.1/users/, retry 1/5", it means another application is already running on port 80 or 443. After I shut down the conflicting service, the command worked.


# chef-server-ctl user-create admin admin admin admin@localhost.com admin1 --filename /etc/chef/admin.pem


where,

  chef-server-ctl user-create -h will help you understand the above command:
  USERNAME FIRST_NAME [MIDDLE_NAME] LAST_NAME EMAIL PASSWORD

Now, create an organization to hold the configurations.


# chef-server-ctl org-create localhost "localhost, Chef Server" --association_user admin --filename /etc/chef/localhost-validator.pem


where,

   chef-server-ctl org-create -h will help you understand the above command.

Make sure the firewall ports for HTTP and HTTPS are open.
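
On CentOS 7 with firewalld, that can be done roughly as follows (a sketch; adjust if you manage iptables directly):

```
# firewall-cmd --permanent --add-service=http
# firewall-cmd --permanent --add-service=https
# firewall-cmd --reload
```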

Chef Workstation:

Download the latest version of ChefDK.

# wget https://packages.chef.io/files/stable/chefdk/2.4.17/el/7/chefdk-2.4.17-1.el7.x86_64.rpm
# rpm -ivh chefdk-2.4.17-1.el7.x86_64.rpm
# chef verify

Verification of component 'fauxhai' succeeded.

Verification of component 'kitchen-vagrant' succeeded.
Verification of component 'openssl' succeeded.
Verification of component 'delivery-cli' succeeded.
Verification of component 'git' succeeded.
Verification of component 'berkshelf' succeeded.
Verification of component 'tk-policyfile-provisioner' succeeded.
Verification of component 'opscode-pushy-client' succeeded.
Verification of component 'test-kitchen' succeeded.
Verification of component 'chefspec' succeeded.
Verification of component 'knife-spork' succeeded.
Verification of component 'inspec' succeeded.
Verification of component 'chef-dk' succeeded.
Verification of component 'chef-sugar' succeeded.
Verification of component 'chef-client' succeeded.
Verification of component 'chef-provisioning' succeeded.
Verification of component 'generated-cookbooks-pass-chefspec' succeeded.
Verification of component 'package installation' succeeded.

Install git and generate a Chef repository.

# yum install git -y
# chef generate repo chef-repo
# ls
chefignore  cookbooks  data_bags  environments  LICENSE  README.md  roles
#

You can add this directory to version control.

# git config --global user.name "username"
# git config --global user.email "xyz@domainname.com"

Now, let's create a hidden directory called ".chef" under the chef-repo directory. This hidden directory will hold the RSA keys that we created on the Chef server.
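
The directory itself can be created as follows (assuming we are inside ~/chef-repo):

```
# cd ~/chef-repo
# mkdir .chef
```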


Add the .chef directory to .gitignore so that the keys are never committed:

# vim .gitignore

.chef
.gitignore
#

Commit all the existing changes

# git add .
# git commit -m "Initial Commit"

The RSA keys (.pem) generated when setting up the Chef server now need to be placed on the workstation, under the "~/chef-repo/.chef" directory.

# scp -pr root@cen01.localhost.com:/etc/chef/admin.pem chef-repo/.chef
# scp -pr root@cen01.localhost.com:/etc/chef/localhost-validator.pem chef-repo/.chef

Knife is a command-line interface between a local chef-repo and the Chef server. To make knife work with your Chef environment, configure it by creating a knife.rb file in the "~/chef-repo/.chef/" directory.


Now, create and edit the knife.rb file


# cat knife.rb

current_dir = File.dirname(__FILE__)
log_level                :info
log_location             STDOUT
node_name                "admin"
client_key               "#{current_dir}/admin.pem"
validation_client_name   "localhost-validator"
validation_key           "#{current_dir}/localhost-validator.pem"
chef_server_url          "https://cen01.localhost.com/organizations/localhost"
syntax_check_cache_path  "#{ENV['HOME']}/.chef/syntaxcache"
cookbook_path            ["#{current_dir}/../cookbooks"]

node_name: This is the username with permission to authenticate to the Chef server. It should match the user that we created on the Chef server.


client_key: The location of the file that contains the user key that we copied over from the Chef server.


validation_client_name: This should be your organization's short name followed by -validator.


validation_key: The location of the file that contains the validation key that we copied over from the Chef server. This key is used when a chef-client registers with the Chef server.

chef_server_url: The URL of the Chef server. It should begin with https://, followed by the IP address or FQDN of the Chef server, with the organization name at the end, just after /organizations/.

#{current_dir} represents the directory containing knife.rb (here, ~/chef-repo/.chef/), so you don't have to write fully qualified paths.

Next, fetch the Chef server's SSL certificate onto the workstation.


# knife ssl fetch

WARNING: Certificates from cen01.localhost.com will be fetched and placed in your trusted_cert
directory (/home/sunlnx/Documents/chef-repo/.chef/trusted_certs).
Knife has no means to verify these are the correct certificates. You should
verify the authenticity of these certificates after downloading.
Adding certificate for cen01_localhost_com in /home/sunlnx/Documents/chef-repo/.chef/trusted_certs/cen01_localhost_com.crt
#

# knife client list

localhost-validator
#

Bootstrapping


Bootstrapping a node is the process of installing chef-client on a target machine so that it can run as a Chef node and communicate with the Chef server.


# knife bootstrap cen02.localhost.com -x root -P <yourpassword> --sudo

Doing old-style registration with the validation key at /home/sunlnx/Documents/chef-repo/.chef/localhost-validator.pem...
Delete your validation key in order to use your user credentials instead
Connecting to cen02.localhost.com
cen02.localhost.com -----> Installing Chef Omnibus (-v 13)
cen02.localhost.com downloading https://omnitruck-direct.chef.io/chef/install.sh
cen02.localhost.com   to file /tmp/install.sh.4190/install.sh
cen02.localhost.com trying wget...
cen02.localhost.com el 7 x86_64
cen02.localhost.com Getting information for chef stable 13 for el...
cen02.localhost.com downloading https://omnitruck-direct.chef.io/stable/chef/metadata?v=13&p=el&pv=7&m=x86_64
cen02.localhost.com   to file /tmp/install.sh.4195/metadata.txt
.
.
cen02.localhost.com Running handlers:
cen02.localhost.com Running handlers complete
cen02.localhost.com Chef Client finished, 0/0 resources updated in 05 seconds
#

Once the bootstrapping is complete, list the nodes using the following command:


# knife node list
cen02.localhost.com
#


Thank you

Sunday, 12 November 2017

Docker Swarm Basic Tutorial

This is my attempt at a basic tutorial on Docker Swarm: how to configure workers, join them to manager (master) nodes, and run services orchestrated through the swarm. After reading this post, you should have a basic idea of container orchestration.

What is container Orchestration ?

Container orchestration systems are where the next action is likely to be in the movement towards building, shipping, and running containers at scale. The most popular software currently providing solutions for this includes Kubernetes, Docker Swarm, and others.

Why do we need container orchestration ?

Imagine you had to run hundreds of containers: you would need to check that they are running in a distributed mode, ensure your cluster stays up, and so on.

A few features are:

- health checks on containers
- launching an exact count of containers for a particular Docker image
- scaling the number of containers up and down depending on the load
- performing rolling updates of software across containers
- ... and more

I assume the reader is familiar with basic Docker commands.

The first step is to create a set of Docker machines that will act as nodes in our Docker Swarm. I am going to create 5 Docker machines, where one of them will act as the manager (leader) and the others will be worker nodes.

I will be using the Docker machines below, with their hostnames and roles:

node1(192.168.0.28) <- Manager
node2(192.168.0.27) <- Worker
node3(192.168.0.26) <- Worker
node4(192.168.0.25) <- Worker
node5(192.168.0.24) <- Worker
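
If you are creating these with docker-machine, a loop along these lines would do (a sketch; the virtualbox driver is an assumption, use whichever driver fits your environment):

```
for i in 1 2 3 4 5; do
  docker-machine create --driver virtualbox node$i
done
```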

Creating Swarm Cluster

Once your machines are set up, you can proceed with setting up the swarm. The first thing to do is to initialize it: log in to node1 and run the following.

root@node1# docker swarm init --advertise-addr 192.168.0.28
Swarm initialized: current node (wga1nmopjbir2ks92rxntj9dz) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3j0boiejfa0dhp82ar51fl1yvsscuwk3n1jppvzafvbv6mycyo-13fkd65lfhh2d1yb31fd59tuk 192.168.0.28:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
root@node1#

You will also notice that the output mentions the docker swarm join command to use in case you want another node to join as a worker. Keep in mind that a node can join as a worker or as a manager. At any point in time there is only one leader; the other manager nodes act as backups in case the current leader opts out.

root@node1# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
wga1nmopjbir2ks92rxntj9dz *   node1               Ready               Active            Leader
root@node1#

You use tokens to join other nodes as either a manager or a worker. See below for how to get the tokens.

Joining as Worker Node

To find out the join command for a worker, run the command below:

root@node1# docker swarm join-token worker
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-3j0boiejfa0dhp82ar51fl1yvsscuwk3n1jppvzafvbv6mycyo-13fkd65lfhh2d1yb31fd59tuk 192.168.0.28:2377
root@node1#

Joining as Manager Node

root@node1# docker swarm join-token manager
To add a manager to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-3j0boiejfa0dhp82ar51fl1yvsscuwk3n1jppvzafvbv6mycyo-a9j3npou6i5pb90fs9h92ez3u 192.168.0.28:2377
root@node1#

Adding worker nodes to Swarm

You need to SSH to node{2..5} and join them as workers.

root@node2# docker swarm join --token SWMTKN-1-3j0boiejfa0dhp82ar51fl1yvsscuwk3n1jppvzafvbv6mycyo-13fkd65lfhh2d1yb31fd59tuk 192.168.0.28:2377
This node joined a swarm as a worker.
root@node2#

root@node3# docker swarm join --token SWMTKN-1-3j0boiejfa0dhp82ar51fl1yvsscuwk3n1jppvzafvbv6mycyo-13fkd65lfhh2d1yb31fd59tuk 192.168.0.28:2377
This node joined a swarm as a worker.
root@node3#

root@node4# docker swarm join --token SWMTKN-1-3j0boiejfa0dhp82ar51fl1yvsscuwk3n1jppvzafvbv6mycyo-13fkd65lfhh2d1yb31fd59tuk 192.168.0.28:2377
This node joined a swarm as a worker.
root@node4#

root@node5# docker swarm join --token SWMTKN-1-3j0boiejfa0dhp82ar51fl1yvsscuwk3n1jppvzafvbv6mycyo-13fkd65lfhh2d1yb31fd59tuk 192.168.0.28:2377
This node joined a swarm as a worker.
root@node5#
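
The four joins above can also be scripted in one loop from a machine that has SSH access to the workers (a sketch; assumes root SSH access to each node):

```
for n in node2 node3 node4 node5; do
  ssh root@$n "docker swarm join --token SWMTKN-1-3j0boiejfa0dhp82ar51fl1yvsscuwk3n1jppvzafvbv6mycyo-13fkd65lfhh2d1yb31fd59tuk 192.168.0.28:2377"
done
```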

Log in to the manager node (node1) and check the status:

root@node1# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
wga1nmopjbir2ks92rxntj9dz *   node1               Ready               Active         Leader
4wzc1nesi1hxa3ekq753w0koq     node2               Ready               Active
4nrsuwslw149mg4nwlipmxxlo     node3               Ready               Active
jbdizz7gvfdz03shtgu23h8kv     node4               Ready               Active
afpi9osujgcsqkdiqg4tnz6nb     node5               Ready               Active
root@node1#

You can also execute 'docker info' and check out the Swarm section:

Swarm: active
 NodeID: wga1nmopjbir2ks92rxntj9dz
 Is Manager: true
 ClusterID: 7xpyowph2y1q2q12rhrkzzfei
 Managers: 1  ==> the swarm has 1 manager among 5 active nodes in total
 Nodes: 5
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 192.168.0.28
 Manager Addresses:
  
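The Swarm section of 'docker info' ends at the next non-indented line, so it can be isolated with a small awk program. The sketch below runs it against sample text; on a live manager, pipe 'docker info' into the same program instead.

```shell
# Print from the "Swarm:" line up to (but not including) the next
# top-level, non-indented key.
swarm_section() {
  awk '/^Swarm:/ {f=1; print; next} f && /^[^ ]/ {exit} f'
}

# Sample input standing in for real `docker info` output:
printf 'Swarm: active\n NodeID: wga1nmopjbir2ks92rxntj9dz\nStorage Driver: overlay2\n' | swarm_section
# prints only the two Swarm lines
```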
Testing Swarm orchestration

Now that we have the swarm up and running, it's time to schedule containers on it. We will not focus on the application itself, and we don't need to worry about where it is going to run. We simply tell the manager to run the containers for us, and it takes care of scheduling them, sending the commands to the nodes and distributing the work.

We will run the 'nginx' container and expose port 80. We can specify the number of instances to launch via '--replicas'.

Everything is run from the manager node (node1), which we also use for administration.

root@node1# docker service create --replicas 5 -p 80:80 --name web nginx
lwvjgxcxrg5wqw2fwrgzosnnw 
Since --detach=false was not specified, tasks will be created in the background.
In a future release, --detach=false will become the default.
root@node1#

You can now see the status of the service being orchestrated across the different nodes.

root@node1# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
lwvjgxcxrg5w        web                 replicated          5/5                 nginx:latest        *:80->80/tcp
root@node1#

root@node1# docker service ps web
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
l00c9u3kmfbn        web.1               nginx:latest        node1               Running             Running 5 minutes ago
nzzi7z3wvikl        web.2               nginx:latest        node2               Running             Running 5 minutes ago
91uizczo4dp3        web.3               nginx:latest        node3               Running             Running 5 minutes ago
nor39f9gu3j2        web.4               nginx:latest        node4               Running             Running 5 minutes ago
mtn3et6gu5on        web.5               nginx:latest        node5               Running             Running 5 minutes ago
root@node1#

Accessing service

You can access the service by hitting any of the manager or worker nodes. It does not matter if a particular node has no container scheduled on it.


root@node1# curl http://192.168.0.28:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@node1#

Scaling up and Scaling down

Currently we are running 5 containers; let's assume we require 10. Run the command below to scale up.

root@node1# docker service scale web=10

root@node1# docker service ps web
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
l00c9u3kmfbn        web.1               nginx:latest        node1               Running             Running 18 minutes ago
nzzi7z3wvikl        web.2               nginx:latest        node2               Running             Running 18 minutes ago
91uizczo4dp3        web.3               nginx:latest        node3               Running             Running 18 minutes ago
nor39f9gu3j2        web.4               nginx:latest        node4               Running             Running 18 minutes ago
mtn3et6gu5on        web.5               nginx:latest        node5               Running             Running 18 minutes ago
t1reh7fnzfg9        web.6               nginx:latest        node3               Running             Running 10 minutes ago
qiyegknsz90w        web.7               nginx:latest        node1               Running             Running 10 minutes ago
zqsafvln29ft        web.8               nginx:latest        node2               Running             Running 10 minutes ago
mx6ezdng0n1i        web.9               nginx:latest        node4               Running             Running 16 seconds ago
xrmugsonlbvl        web.10              nginx:latest        node5               Running             Running 16 seconds ago
root@node1# 

You can scale down with the same command; it will reduce the number of containers across the nodes.
root@node1# docker service scale web=5 

Draining node

Whenever a node is ACTIVE, it is always ready to accept tasks from the manager.

root@node1#docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
wga1nmopjbir2ks92rxntj9dz *   node1               Ready               Active              Leader
4wzc1nesi1hxa3ekq753w0koq     node2               Ready               Active
4nrsuwslw149mg4nwlipmxxlo     node3               Ready               Active
jbdizz7gvfdz03shtgu23h8kv     node4               Ready               Active
afpi9osujgcsqkdiqg4tnz6nb     node5               Ready               Active
root@node1#

When the node is active, it can receive new tasks:
- during a service update to scale up
- during a rolling update
- when you set another node to Drain availability
- when a task fails on another active node

But sometimes we have to bring a node down for maintenance. This is done by setting its availability to Drain mode. Let us try that with one of our nodes.

Since there are 10 containers running, each node has 2 containers on it. Let's drain one of the nodes and see how its containers are re-orchestrated without disrupting the service.

root@node1# docker inspect node5
.
.
<snip>
        "Spec": {
            "Labels": {},
            "Role": "worker",
            "Availability": "active"  ==> status set to active
.
.
<snip>
            }
        },
        "Status": {
            "State": "ready",
            "Addr": "192.168.0.24"
        }
<snip>


root@node1# docker node update --availability drain node5
node5
root@node1#

root@node1# docker service ps web
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                     ERROR               PORTS
l00c9u3kmfbn        web.1               nginx:latest        node1               Running             Running 30 minutes ago
nzzi7z3wvikl        web.2               nginx:latest        node2               Running             Running 30 minutes ago
91uizczo4dp3        web.3               nginx:latest        node3               Running             Running 30 minutes ago
nor39f9gu3j2        web.4               nginx:latest        node4               Running             Running 30 minutes ago
e5pdetk2lghs        web.5               nginx:latest        node3               Running             Running less than a second ago
mtn3et6gu5on         \_ web.5           nginx:latest        node5               Shutdown            Shutdown 1 second ago
t1reh7fnzfg9        web.6               nginx:latest        node3               Running             Running 21 minutes ago
qiyegknsz90w        web.7               nginx:latest        node1               Running             Running 21 minutes ago
zqsafvln29ft        web.8               nginx:latest        node2               Running             Running 21 minutes ago
mx6ezdng0n1i        web.9               nginx:latest        node4               Running             Running 11 minutes ago
m23nbrdvb635        web.10              nginx:latest        node4               Running             Running less than a second ago
xrmugsonlbvl         \_ web.10          nginx:latest        node5               Shutdown            Shutdown less than a second ago
root@node1#

root@node1# docker inspect node5
<snip>
"Spec": {
            "Labels": {},
            "Role": "worker",
            "Availability": "drain"
<snip>

You can now see that the containers on node5 have been re-scheduled onto node3 and node4.
You can now carry out your maintenance on node5; once it is back up, set its availability to Active again, after which it can receive new tasks.

Remove service

You can remove the service with the 'rm' subcommand.
root@node1# docker service rm web

Rolling upgrade

root@node1# docker service update --image <imagename>:<version> web
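
For example, to move the running service to a specific image tag (the tag below is purely illustrative):

```
root@node1# docker service update --image nginx:1.13 web
```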

Thanks for re-sharing...

Saturday, 9 September 2017

Configure Mail Server & Setting up Mail Client: Fedora Linux 25

Here, I will show you how to configure a mail server (MTA) and a client (MUA) using Evolution, which is the default mail client.

Environment: Fedora 25

We will go over a few components before we set up the email configuration.

MUA (Mail User Agent) or Mail Client: the application used to write/send/read email messages,
e.g. Evolution, KMail, Outlook, etc., as well as text-based mail clients like pine, mail, etc.

MTA (Mail Transfer Agent): transfers email messages from one computer to another (intranet or Internet). We will be configuring Postfix in this section.

MDA (Mail Delivery Agent): receives emails from the MTA and delivers them to the relevant mailbox, from which the MUA reads them (e.g. Dovecot). A few popular MDAs can also remove unwanted email messages or spam before they reach the MUA inbox (e.g. Procmail, etc.).

SMTP (Simple Mail Transfer Protocol): the common language that MTAs use to talk to each other and transfer messages back and forth.

Architecture


Configuring MTA

Login as 'root' to perform below steps.
Note: SElinux was disabled.

- install postfix.
#dnf install postfix -y

- Take a backup copy of the file, then paste in the contents below and change them according to your infra setup (in the original post, the lines to change were highlighted in red).

#mv /etc/postfix/main.cf /etc/postfix/main.cf.original

# cat /etc/postfix/main.cf
compatibility_level = 2
queue_directory = /var/spool/postfix
command_directory = /usr/sbin
daemon_directory = /usr/libexec/postfix
data_directory = /var/lib/postfix
mail_owner = postfix
myhostname = fedora.localhost.com
mydomain = localhost.com
myorigin = $mydomain
inet_interfaces = $myhostname
inet_protocols = ipv4
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
unknown_local_recipient_reject_code = 550
mynetworks = 192.168.122.0/24, 127.0.0.0/8, 10.0.0.0/24
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
smtpd_banner = $myhostname ESMTP
debug_peer_level = 2
debugger_command =
         PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin
         ddd $daemon_directory/$process_name $process_id & sleep 5
sendmail_path = /usr/sbin/sendmail.postfix
newaliases_path = /usr/bin/newaliases.postfix
mailq_path = /usr/bin/mailq.postfix
setgid_group = postdrop
html_directory = no
manpage_directory = /usr/share/man
sample_directory = /usr/share/doc/postfix/samples
readme_directory = /usr/share/doc/postfix/README_FILES
meta_directory = /etc/postfix
shlib_directory = /usr/lib64/postfix
message_size_limit = 10485760
mailbox_size_limit = 1073741824
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous
smtpd_sasl_local_domain = $myhostname
smtpd_recipient_restrictions = permit_mynetworks,permit_auth_destination,permit_sasl_authenticated,reject
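
The two size limits above are plain byte counts; a quick shell check confirms they correspond to a 10 MiB message limit and a 1 GiB mailbox limit:

```shell
# main.cf sizes are given in bytes
echo $((10 * 1024 * 1024))      # message_size_limit: 10485760
echo $((1024 * 1024 * 1024))    # mailbox_size_limit: 1073741824
```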

- Start the service and make it persistent across reboots:
# systemctl start postfix
# systemctl enable postfix

- If you have a firewall running, add the 'smtp' service:
#firewall-cmd --add-service=smtp --permanent
#firewall-cmd --reload

Configuring MDA

- install dovecot
#dnf install dovecot -y

- Take a backup copy of each file, then paste in the contents below and change them according to your infra setup (in the original post, the lines to change were highlighted in red).

#mv /etc/dovecot/dovecot.conf /etc/dovecot/dovecot.conf.original
#mv /etc/dovecot/conf.d/10-auth.conf /etc/dovecot/conf.d/10-auth.conf.original
#mv /etc/dovecot/conf.d/10-mail.conf /etc/dovecot/conf.d/10-mail.conf.original
#mv /etc/dovecot/conf.d/10-master.conf /etc/dovecot/conf.d/10-master.conf.original
#mv /etc/dovecot/conf.d/10-ssl.conf /etc/dovecot/conf.d/10-ssl.conf.original

# cat /etc/dovecot/dovecot.conf 
protocols = imap pop3 lmtp
listen = *,::
dict {
}
!include conf.d/*.conf
!include_try local.conf
#

# cat /etc/dovecot/conf.d/10-auth.conf
disable_plaintext_auth = no
auth_mechanisms = plain login
!include auth-system.conf.ext
#

# cat /etc/dovecot/conf.d/10-mail.conf

mail_location = maildir:~/Maildir
namespace inbox {
  inbox = yes
}
protocol !indexer-worker {
}
mbox_write_locks = fcntl
#

# cat /etc/dovecot/conf.d/10-master.conf
service imap-login {
  inet_listener imap {
  }
  inet_listener imaps {
  }
}
service pop3-login {
  inet_listener pop3 {
  }
  inet_listener pop3s {
  }
}
service lmtp {
  unix_listener lmtp {
  }
}
service imap {
}
service pop3 {
}
service auth {
  unix_listener auth-userdb {
  }
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
    user = postfix
    group = postfix
  }
}
service auth-worker {
}
service dict {
  unix_listener dict {
  }
}
#

# cat /etc/dovecot/conf.d/10-ssl.conf
ssl = required
ssl_cert = </etc/pki/dovecot/certs/dovecot.pem
ssl_key = </etc/pki/dovecot/private/dovecot.pem
ssl_cipher_list = PROFILE=SYSTEM
#

- Start the service and make it persistent across reboots:
# systemctl start dovecot
# systemctl enable dovecot

- If you have a firewall running, add the 'pop3' and 'imap' services:
#firewall-cmd --add-service={pop3,imap} --permanent
#firewall-cmd --reload

Configure MUA

Click on 'Evolution' and configure as per below ...

Edit -> Preferences -> Add -> Next





Leave the rest at the defaults and continue, then click [OK].

Testing

Compose an email, send it to yourself, and read it :)
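
From the shell, the setup can also be smoke-tested with a command-line mail client (a sketch; assumes the mailx package is installed and that 'user' exists locally):

```
# echo "test body" | mail -s "test subject" user@localhost.com
# su - user -c 'ls ~/Maildir/new'
```

If delivery works, a new message file appears under the user's Maildir (per the mail_location setting above).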




Thanks