Tuesday, February 27, 2018

Installing EPEL Repository to Oracle Linux 7


Extra Packages for Enterprise Linux (EPEL) is a repository that holds high-quality extra packages that Red Hat Enterprise Linux (RHEL) and RHEL-based distributions (like Oracle Linux 7) do not include. Installing EPEL on Oracle Linux 7 is simple: we just need to download and install an RPM package that EPEL provides.
As root (or using sudo):
# wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# rpm -ivh epel-release-latest-7.noarch.rpm
# yum update
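
To confirm that the new repository is enabled, you can list the active repositories and look for EPEL (a quick check; the exact output varies by system):

# yum repolist enabled | grep -i epel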

EPEL contains a large amount of software. You can browse the packages that are available for Oracle Linux 7 (x86_64) at the following link:

https://dl.fedoraproject.org/pub/epel/7/x86_64/repoview/

You can find more information about EPEL on the project wiki.
EPEL official wiki page: https://fedoraproject.org/wiki/EPEL

Monday, February 26, 2018

Oracle Database Administration Scripts | DBA Bundle

Introduction:

In this post, I'll share with you one of the most helpful tools I've ever created; it will make your day-to-day database administration activities easier, faster, and safer.

I named it DBA Bundle. It's a tar file containing a group of shell scripts; you can DOWNLOAD the latest version from this link: [V. 4.1  22-Feb-2018]
https://www.dropbox.com/s/xn0tf2pfeq04koi/DBA_BUNDLE4.tar?dl=0

I've designed all the scripts to run on complicated environments, where one or more Oracle versions / Oracle Homes are installed on the same machine.

All scripts inside the bundle can easily recognize the Oracle environment, whether it's Linux or Unix, along with the installed database versions and Oracle Homes.
All scripts can also handle wrong user inputs gracefully.

Now let's get started with the top key features in this bundle ...

How to use the bundle:

First, download the bundle tar file and extract it under the Oracle owner's home directory, e.g. /home/oracle. Each script inside this bundle is independent; in other words, the absence of any script will not affect the execution of the others.

Second, from the directory where you extracted the bundle, run the "aliases_DBA_BUNDLE.sh" script using the "." command, e.g.  . aliases_DBA_BUNDLE.sh
It will add an alias for each script to the user's profile, making it easy to call any script from the OS shell, from any working directory, with a single command, without having to change into the bundle directory.
The aliases will be displayed along with their descriptions in a tabular format. There is no need to memorize them; if you want to list them again, just type the "bundle" command.
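
For example, a first run might look like the following sketch; the tar file name matches the download link above, while the extraction directory name is an assumption, so adjust both to whatever you actually downloaded:

cd /home/oracle
tar xvf DBA_BUNDLE4.tar
cd DBA_BUNDLE4            # assumed extraction directory name
. aliases_DBA_BUNDLE.sh   # source it so the aliases are added to your profile
# later on, just type "bundle" to display the alias list again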

Please note that when you run any interactive script in this bundle it will prompt you to select the database number from the displayed list in case you have more than one up & running database.

For a script like "aliases_DBA_BUNDLE.sh", when you select a database from the list, aliases like "alert, tns, bdump, ..." will automatically point to the respective files of the database you have chosen.
e.g. if you have two running instances (orcl & salesdb) and you want to open the alert log of salesdb, just run the "aliases_DBA_BUNDLE.sh" script, enter salesdb's number from the displayed list, and then type alert/vialert to view the salesdb alert log files.


If you didn't run the "aliases_DBA_BUNDLE.sh" script, then each time you want to call a script you will need to change into the bundle directory and execute the script from there.

Scripts Description:
Now let me give you a brief description of each script in this bundle:


Generic Aliases (associated with the default selected database)

bundle: Set a database as the default database (all generic aliases will be associated with this database).
alert: Open the database alert log with tail -f.
vialert: Open the database alert log with the vi editor.
raclog: Open the Clusterware/Oracle Restart alert log.
sql: Open sqlplus '/ as sysdba'.
p: List all running database instances (PMON processes).
lsn: List running listeners.
lis: Open the listener.ora file with the vi editor.
tns: Open the tnsnames.ora file with the vi editor.
pfile: Open the default instance PFILE with the vi editor.
spfile: Open the default instance SPFILE with the view editor.
oh: Go to the $ORACLE_HOME directory.
dbs: Go to $ORACLE_HOME/dbs.
aud: Go to $ORACLE_HOME/rdbms/audit.
bdump: Go to BACKGROUND_DUMP_DEST.
network: Go to $ORACLE_HOME/network/admin.
removebundle: Remove all the bundle aliases from the system. (When you run it, let me know the reason.)
Scripts with Aliases (format: script_name.sh [shell alias]: description)

table_info.sh [tableinfo]: Show a specific table's important info (size, indexes, non-indexed FKs, constraints, ...).
oradebug.sh [oradebug]: Generate a hang analysis report using the oradebug tool in case of instance hang. (New script in V3.6)
active_sessions.sh [active]: Show the current active sessions and their blocking sessions, along with long-running operations + current running jobs + long-running queries. (New script in V3.6)
session_details.sh [session]: List the details of a specific user session. (If no input is provided, it will list all sessions on the instance.)
all_sessions_info.sh [sessions]: List all connected sessions on all running instances [RAC DB] along with their distribution in detail.
process_info.sh [spid]: Show the DB session details when providing its Unix PID.
sql_id_details.sh [sqlid]: Show the details of a specific SQL statement by providing its SQL_ID, with the option of tuning it using SQL TUNING ADVISOR. Co-author: Farrukh Salman [Karkoor]
asmdisks.sh [asmdisks]: Show ASM diskgroups and their sizes, ASM disks, and ASM disk mount points on the OS.
tablespaces.sh [tbs]: List all tablespaces, ASM disk groups, and FRA (if configured) allocated size and free space details.
datafiles.sh [datafiles]: List all datafiles and their sizes.
export_data.sh [exportdata]: Export full DB | SCHEMA | TABLE data. (Gives you the option of using the exp or expdp utility for the export.)
RMAN_full.sh [rmanfull]: Take an online RMAN full backup of the database. (Gives you the option of number of channels / compressed / encrypted backup type.)
Archives_Delete.sh [archivedel]: Delete all archivelogs older than the provided number of days.
analyze_objects.sh [Analyze]: Analyze all tables under a specific schema (using the legacy ANALYZE command).
gather_stats.sh [gather]: Gather statistics on a schema or table using DBMS_STATS. http://dba-tips.blogspot.ae/2014/09/script-to-ease-gathering-statistics-on.html
rebuild_table.sh [tablerebuild]: Rebuild a table and its related indexes.
db_locks.sh [locks]: List blocking lock details on the database (blocking users, blocking locks on objects, long-running operations).
db_jobs.sh [jobs]: List all database jobs (dba_jobs + dba_scheduler_jobs + Auto Tune Tasks and current running jobs and their wait status) + the job DDL & its history if its name/number is provided.
invalid_objects.sh [invalid]: List all invalid objects on the DB + their compile statements.
biggest_100_objects.sh [objects]: List the biggest 100 objects on the database.
object_size.sh [objectsize]: Calculate any object's size + its indexes' size.
lock_user.sh [lockuser]: Lock a specific DB user account and expire the password.
unlock_user.sh [unlockuser]: Unlock a specific DB user account + the option of resetting the account password.
audit_records.sh [audit]: Retrieve AUDIT data for a DB user on a specific date or a number of days back. http://dba-tips.blogspot.ae/2014/02/extract-oracle-audit-records-script.html
last_logon_report.sh [lastlogin]: Show the last login date of all users in the database.
failed_logins.sh [failedlogin]: Show the failed login attempts in the last provided N number of days.
parameter_val.sh [parm]: Show the value of a visible/hidden initialization parameter.
user_details.sh [userdetail]: Generate the DDL creation script for a DB user + its privileges and important info about its schema and objects.
user_ddl.sh [userddl]: Generate the DDL creation script for a DB user + its privileges and important info about its schema and objects.
object_ddl.sh [objectddl]: Generate the DDL script for a database object + the permissions granted on it.
role_ddl.sh [roleddl]: Generate the DDL script for a database role.
start_tracing.sh [starttrace]: Start tracing an Oracle session's activities in a trace file. http://dba-tips.blogspot.ae/2014/02/script-to-trace-oracle-sesson.html
stop_tracing.sh [stoptrace]: Stop tracing an already traced Oracle session & provide the session's trace file and its TKPROF'ed version. http://dba-tips.blogspot.ae/2014/02/script-to-trace-oracle-sesson.html
oracle_cleanup.sh [cleanup]: Back up & clean up all DBs' & listeners' logs. http://dba-tips.blogspot.ae/2014/02/oracle-logs-cleanup-script.html
Scripts without Aliases

dbalarm.sh: Monitors the alert logs of all databases and the listener logs running on the server and instantly reports ORA- and TNS- errors that appear in these logs by sending a detailed e-mail to the DBA, along with monitoring CPU, filesystem/FRA/tablespace utilization and blocking locks.
(You have to modify this parameter on line 27 of the script to point to your e-mail address):
MAIL_LIST="youremail@yourcompany.com"
Note: the sendmail service should be configured on the server.
*The best way to use this script is to schedule it in the crontab every 5 minutes (or less); see the example crontab entries after this list.
For more details:
http://dba-tips.blogspot.ae/2014/02/database-monitoring-script-for-ora-and.html
dbdailychk.sh: Performs the following health checks on all running databases on the server:
# CHECKING ALL DATABASES ALERTLOGS FOR ERRORS.
# CHECKING ALL LISTENERS ALERTLOGS FOR ERRORS.
# CHECKING CPU UTILIZATION.
# CHECKING FILESYSTEM UTILIZATION.
# CHECKING TABLESPACES UTILIZATION.
# CHECKING FLASH RECOVERY AREA UTILIZATION.
# CHECKING ASM DISKGROUPS UTILIZATION.
# CHECKING BLOCKING SESSIONS ON THE DATABASE.
# CHECKING UNUSABLE INDEXES ON THE DATABASE.
# CHECKING INVALID OBJECTS ON THE DATABASE.
# CHECKING FAILED LOGIN ATTEMPTS ON THE DATABASE.
# CHECKING AUDIT RECORDS ON THE DATABASE.
# CHECKING CORRUPTED BLOCKS ON THE DATABASE.
# CHECKING FAILED JOBS IN THE DATABASE.
# CHECKING ACTIVE INCIDENTS.
# CHECKING OUTSTANDING ALERTS.
# CHECKING DATABASE SIZE GROWTH.
# CHECKING OS / HARDWARE STATISTICS.
# CHECKING RESOURCE LIMITS.
# CHECKING RECYCLEBIN.
# CHECKING CURRENT RESTORE POINTS.
# CHECKING HEALTH MONITOR CHECKS RECOMMENDATIONS THAT RUN BY DBMS_HM PACKAGE.
# CHECKING MONITORED INDEXES.
# CHECKING REDOLOG SWITCHES.
# CHECKING MODIFIED INITIALIZATION PARAMETERS SINCE THE LAST DB STARTUP.
# CHECKING ADVISORS RECOMMENDATIONS.

Replace the youremail@yourcompany.com template with your e-mail address.
You can also customize the defined thresholds as per your preferences under the THRESHOLD section inside the script.
As a last step, schedule the script to run in the crontab, e.g. once early in the morning:
0 6 * * * /home/oracle/dbdailychk.sh
For more details:
http://dba-tips.blogspot.ae/2015/05/oracle-database-health-check-script.html
delete_standby_archives.sh: Deletes the applied archivelogs on STANDBY databases older than N hours (specified by the user). To be customized and scheduled from the crontab.
For more details:
http://dba-tips.blogspot.ae/2017/01/script-to-delete-applied-archivelogs-on.html

COLD_BACKUP.sh: Takes a COLD BACKUP of a specific database.
(The beauty of this script is that once it takes the cold backup, it generates another script to help you restore that cold backup easily.)
This script will perform the following activities: shut down the database, take a cold backup, create a restore script (in case you want to restore this cold backup later), and then automatically start the database back up.
For more details:
http://dba-tips.blogspot.ae/2014/02/cold-backup-script.html

SHUTDOWN_All.sh: Shuts down ALL running databases & listeners on the server. Keep away from children :-)

schedule_rman_full_bkp.sh: Takes an RMAN full backup of a specific database. Can be scheduled in the crontab.
You MUST adjust the variables/channels/maintenance section to match your environment.

schedule_rman_image_copy_bkp.sh: Takes an RMAN image copy of a specific database. Can be scheduled in the crontab.
You MUST adjust the variables/channels/maintenance sections to match your environment.
Why consider RMAN image backups in your backup strategy? The answer is in this link:
http://dba-tips.blogspot.ae/2011/11/switch-database-to-rman-copy-backup-and.html

delete_applied_archives_on_standby.sh: Deletes the applied archivelogs on a standby DB.
For more details:
http://dba-tips.blogspot.com/2017/01/script-to-delete-applied-archivelogs-on.html

configuration_baseline.sh: Collects all kinds of configuration baseline data for the OS and all running databases to help you track and control the changes in your environment.
For more details:
http://dba-tips.blogspot.com/2016/12/configuration-baseline-script-for-linux.html

backup_ctrl_spf_AWR.sh: Backs up the controlfile (as trace / RMAN backup), backs up the SPFILE, and generates an AWR report for the full day. This script can be scheduled in the crontab to run once a day. Script options/variables must be modified to match your environment. New starting from V4.1.

kill_long_running_queries.sh: Kills all queries running longer than 2.5 hours (can be customized inside the script) by specific modules (to be specified inside the script).
This script can be scheduled in the crontab. Script options/variables MUST be modified so the killing criteria match your requirements. New starting from V4.1.

check_standby_lag.sh: If you have a standby DB, you can use this script on the primary DB site to report any LAG that happens between the primary and the standby DB.
The variables section at the top of the script must be populated by you to match your environment, or the script will not be able to run.
This script can be scheduled in the crontab to run every 5 minutes. New starting from V4.1.
This link will show you how to use this script:
http://dba-tips.blogspot.ae/2017/11/shell-script-to-check-lag-sync-status.html
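
As an illustration of the scheduling mentioned above for dbalarm.sh and dbdailychk.sh, the crontab entries (added with crontab -e as the oracle OS user) might look like the following sketch; the paths assume the scripts live in /home/oracle, so adjust them to your environment:

# Run the alert log / listener / resource monitor every 5 minutes
*/5 * * * * /home/oracle/dbalarm.sh
# Run the daily health check once a day at 06:00
0 6 * * * /home/oracle/dbdailychk.sh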
From time to time I'll keep updating this bundle with new scripts, bug fixes, and new features, so revisit this post at least once every 3 months to download the latest version.

As I mentioned in each script, I'M SHARING THIS BUNDLE AND ITS SCRIPTS IN THE HOPE THAT IT WILL BE USEFUL FOR YOU, BUT WITHOUT ANY WARRANTY. ALL SCRIPTS IN THIS BUNDLE ARE PROVIDED "AS IS".

  No one is perfect... that's why pencils have erasers.

You can download older versions from below links : 
https://app.box.com/s/l4cmpxfrfy8t6emqrpgo   [V. 1.1]
https://www.dropbox.com/s/mh0rk14alc69gqj/DBA_BUNDLE1_Sep2014.tar?dl=0   [V. 1.7 Sep2014]
https://www.dropbox.com/s/vrhslrg4l5xhzyb/DBA_BUNDLE1.tar?dl=0      [V. 1.8 Oct2014]
https://www.dropbox.com/s/lgrprfazgkeoxb5/DBA_BUNDLE2.tar?dl=0  [V. 2.0 08-May-2015]
https://www.dropbox.com/s/wnzvp49cyamqu66/DBA_BUNDLE2_Oct2015.tar?dl=0 [V. 2.2 Oct-2015]
https://www.dropbox.com/s/a1wn1j1squjf1qx/DBA_BUNDLE2_6Feb2016.tar?dl=0 [V2.3 Feb2016]
https://www.dropbox.com/s/lgrprfazgkeoxb5/DBA_BUNDLE2_25Apr2016.tar?dl=0 [V2.4 Apr2016]
https://www.dropbox.com/s/gxxb7jws8xngurj/DBA_BUNDLE3_10Oct2016.tar?dl=0 [V3.1 Oct2016]
https://www.dropbox.com/s/5wj52xqse9wcu6l/DBA_BUNDLE3_7Dec2016.tar?dl=0 [V3.3 Dec2016]
https://www.dropbox.com/s/c5tvgvvs3c8b749/DBA_BUNDLE3_3Jan2017.tar?dl=0 [V3.4 Jan2017]

Your suggestions, bug reporting and comments are most welcome :-)


Lastly, a special thank you to Abd El-Gawad Othman; without his support, suggestions, and encouragement, I wouldn't have been confident enough to share this bundle with you.

Wednesday, February 21, 2018

How To Set Up Multi-Factor Authentication for SSH on CentOS 7

Introduction

An authentication factor is a single piece of information used to prove you have the right to perform an action, like logging into a system. An authentication channel is the way an authentication system delivers a factor to the user or requires the user to reply. Passwords and security tokens are examples of authentication factors; computers and phones are examples of channels.
SSH uses passwords for authentication by default, and most SSH hardening instructions recommend using an SSH key instead. However, this is still only a single factor. If a bad actor has compromised your computer, then they can use your key to compromise your servers as well.
In this tutorial, we'll set up multi-factor authentication to combat that. Multi-factor authentication (MFA) requires more than one factor in order to authenticate, or log in. This means a bad actor would have to compromise multiple things, like both your computer and your phone, to get in. The different types of factors are often summarized as:
  1. Something you know, like a password or security question
  2. Something you have, like an authenticator app or security token
  3. Something you are, like your fingerprint or voice
One common factor is an OATH-TOTP app, like Google Authenticator. OATH-TOTP (Open Authentication Time-Based One-Time Password) is an open protocol that generates a one-time-use password, commonly a 6-digit number that is recycled every 30 seconds.
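If you want to see such codes being generated outside of an authenticator app, the oath-toolkit package (an assumption here; it isn't used elsewhere in this tutorial) provides a command-line generator that prints the current 6-digit code for a given base32 secret (the secret below is a made-up placeholder):
  • oathtool --totp -b "JBSWY3DPEHPK3PXP"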
This article will go over how to enable SSH authentication using an OATH-TOTP app in addition to an SSH key. Logging into your server via SSH will then require two factors across two channels, thereby making it more secure than a password or SSH key alone. In addition, we'll go over some additional use cases for MFA and some helpful tips and tricks.

Prerequisites

To follow this tutorial, you will need:
  • One CentOS 7 server with a sudo non-root user and SSH key, which you can set up by following this Initial Server Setup tutorial.
  • A smartphone or tablet with an OATH-TOTP app installed, like Google Authenticator (iOS, Android).

Step 1 — Installing Google's PAM

In this step, we'll install and configure Google's PAM.
PAM, which stands for Pluggable Authentication Module, is an authentication infrastructure used on Linux systems to authenticate a user. Because Google made an OATH-TOTP app, they also made a PAM that generates TOTPs and is fully compatible with any OATH-TOTP app, like Google Authenticator or Authy.
First, we need to add the EPEL (Extra Packages for Enterprise Linux) repo.
  • sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Next, install the PAM. You may be prompted to accept the EPEL key if this is the first time you're using the repo. Once accepted, you won't be prompted again.
  • sudo yum install google-authenticator
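If you want to confirm the package is present before moving on, you can query it (an optional check; the output shows the installed version):
  • rpm -q google-authenticator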
With the PAM installed, we'll use a helper app that comes with the PAM to generate a TOTP key for the user you want to add a second factor to. This key is generated on a user-by-user basis, not system-wide. This means every user that wants to use a TOTP auth app will need to log in and run the helper app to get their own key; you can't just run it once to enable it for everyone (but there are some tips at the end of this tutorial to set up or require MFA for many users).
Run the initialization app.
  • google-authenticator
After you run the command, you'll be asked a few questions. The first one asks if authentication tokens should be time-based.
Output
Do you want authentication tokens to be time-based (y/n) y
This PAM allows for time-based or sequential-based tokens. Using sequential-based tokens means the code starts at a certain point and then increments after every use. Using time-based tokens means the code changes randomly after a certain time elapses. We'll stick with time-based because that is what apps like Google Authenticator anticipate, so answer y for yes.
After answering this question, a lot of output will scroll past, including a large QR code. At this point, use your authenticator app on your phone to scan the QR code or manually type in the secret key. If the QR code is too big to scan, you can use the URL above the QR code to get a smaller version. Once it's added, you'll see a six digit code that changes every 30 seconds in your app.
Note: Make sure you record the secret key, verification code, and the recovery codes in a safe place, like a password manager. The recovery codes are the only way to regain access if you, for example, lose access to your TOTP app.


The remaining questions inform the PAM how to function. We'll go through them one by one.
Output
Do you want me to update your "/home/sammy/.google_authenticator" file (y/n) y
This writes the key and options to the .google_authenticator file. If you say no, the program quits and nothing is written, which means the authenticator won't work.
Output
Do you want to disallow multiple uses of the same authentication token? This restricts you to one login about every 30s, but it increases your chances to notice or even prevent man-in-the-middle attacks (y/n) y
By answering yes here, you are preventing a replay attack by making each code expire immediately after use. This prevents an attacker from capturing a code you just used and logging in with it.
Output
By default, tokens are good for 30 seconds. In order to compensate for possible time-skew between the client and the server, we allow an extra token before and after the current time. If you experience problems with poor time synchronization, you can increase the window from its default size of +-1min (window size of 3) to about +-4min (window size of 17 acceptable tokens). Do you want to do so? (y/n) n
Answering yes here allows up to 8 valid codes in a moving four minute window. By answering no, you limit it to 3 valid codes in a 1:30 minute rolling window. Unless you find issues with the 1:30 minute window, answering no is the more secure choice.
Output
If the computer that you are logging into isn't hardened against brute-force login attempts, you can enable rate-limiting for the authentication module. By default, this limits attackers to no more than 3 login attempts every 30s. Do you want to enable rate-limiting (y/n) y
Rate limiting means a remote attacker can only attempt a certain number of guesses before being blocked. If you haven't previously configured rate limiting directly into SSH, doing so now is a great hardening technique.
Note: Once you finish this setup, if you want to back up your secret key, you can copy the ~/.google_authenticator file to a trusted location. From there, you can deploy it on additional systems or redeploy it after a backup.
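A minimal backup sketch, assuming /path/to/trusted/location is a placeholder for a directory only you can read:

# /path/to/trusted/location is a placeholder; use a directory only you can read
cp ~/.google_authenticator /path/to/trusted/location/
chmod 600 /path/to/trusted/location/.google_authenticator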


Now that Google's PAM is installed and configured, the next step is to configure SSH to use your TOTP key. We'll need to tell SSH about the PAM and then configure SSH to use it.

Step 2 — Configuring OpenSSH

Because we'll be making SSH changes over SSH, it's important to never close your initial SSH connection. Instead, open a second SSH session to do testing. This is to avoid locking yourself out of your server if there was a mistake in your SSH configuration. Once everything works, then you can safely close any sessions.
To begin, we'll edit the PAM configuration file for sshd. Here, we're using nano, which isn't installed on CentOS by default. You can install it with sudo yum install nano, or use your favorite alternative text editor.
  • sudo nano /etc/pam.d/sshd
Add the following line to the bottom of the file.
/etc/pam.d/sshd
. . .
# Used with polkit to reauthorize users in remote sessions
-session   optional     pam_reauthorize.so prepare
auth required pam_google_authenticator.so nullok
The nullok word at the end of the last line tells the PAM that this authentication method is optional. This allows users without an OATH-TOTP token to still log in using their SSH key. Once all users have an OATH-TOTP token, you can remove nullok from this line to make MFA mandatory.
Save and close the file.
Next, we'll configure SSH to support this kind of authentication. Open the SSH configuration file for editing.
  • sudo nano /etc/ssh/sshd_config
Look for the ChallengeResponseAuthentication lines. Comment out the no line and uncomment the yes line.
/etc/ssh/sshd_config
. . .
# Change to no to disable s/key passwords
ChallengeResponseAuthentication yes
#ChallengeResponseAuthentication no
. . .
Save and close the file, then restart SSH to reload the configuration files. Restarting the sshd service won't close open connections, so you won't risk locking yourself out with this command.
  • sudo systemctl restart sshd.service
To test that everything's working so far, open another terminal and try logging in over SSH. If you've previously created an SSH key and are using it, you'll notice you didn't have to type in your user's password or the MFA verification code. This is because an SSH key overrides all other authentication options by default. Otherwise, you should have gotten a password and verification code prompt.
If you want to make sure what you've done so far works, in your open SSH session navigate to ~/.ssh/, temporarily rename the authorized_keys file, then open a new session and log in with your password and verification code.
  • cd ~/.ssh
  • mv authorized_keys authorized_keys.bak
Once you've verified that your TOTP token works, rename the authorized_keys.bak file back to its original name.
  • mv authorized_keys.bak authorized_keys
Next, we need to make SSH require both the SSH key as one factor and the verification code as a second, by telling SSH which factors to use and preventing the SSH key from overriding all other types.

Step 3 — Making SSH Aware of MFA

Reopen the sshd configuration file.
  • sudo nano /etc/ssh/sshd_config
Add the following line at the bottom of the file. This tells SSH which authentication methods are required; specifically, it tells SSH we need an SSH key and either a password or a verification code (or all three).
/etc/ssh/sshd_config
. . .
# Added by DigitalOcean build process
ClientAliveInterval 120
ClientAliveCountMax 2
AuthenticationMethods publickey,password publickey,keyboard-interactive
Save and close the file.
Next, open the PAM sshd configuration file again.
  • sudo nano /etc/pam.d/sshd
Find the line auth substack password-auth towards the top of the file. Comment it out by adding a # character as the first character on the line. This tells PAM not to prompt for a password.
/etc/pam.d/sshd
. . .
#auth       substack     password-auth
. . .
Save and close the file, then restart SSH.
  • sudo systemctl restart sshd.service
Now try logging into the server again with a different session. Unlike last time, SSH should ask for your verification code. Upon entering it, you'll be logged in. Even though you don't see any indication that your SSH key was used, your login attempt used two factors. If you want to verify, you can add -v (for verbose) after the SSH command:
Example SSH output
. . .
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /Users/sammy/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 279
Authenticated with partial success.
debug1: Authentications that can continue: keyboard-interactive
debug1: Next authentication method: keyboard-interactive
Verification code:
Towards the end of the output, you'll see where SSH uses your SSH key and then asks for the verification code. You can now log in over SSH with an SSH key and a one-time password. If you want to enforce all three authentication types, you can follow the next step.

Step 4 — Adding a Third Factor (Optional)

In Step 3, we listed the approved types of authentication in the sshd_config file:
  1. publickey (SSH key)
  2. password publickey (password)
  3. keyboard-interactive (verification code)
Although we listed three different factors, with the options we've chosen so far, they only allow for an SSH key and the verification code. If you'd like to have all three factors (SSH key, password, and verification code), one quick change will enable all three.
Open the PAM sshd configuration file.
  • sudo nano /etc/pam.d/sshd
Locate the line you commented out previously, #auth substack password-auth, and uncomment the line by removing the # character. Save and close the file. Now once again, restart SSH.
  • sudo systemctl restart sshd.service
By enabling the option auth substack password-auth, PAM will now prompt for a password in addition to checking for an SSH key and asking for a verification code, which we had working previously. Now we can use something we know (password) and two different types of things we have (SSH key and verification code) over two different channels.
So far, this article has outlined how to enable MFA with an SSH key and a time-based one-time password. If this is all you need, you can end here. However, this isn't the only way to do multi-factor authentication. Below are a couple additional ways of using this PAM module for multi-factor authentication and some tips and tricks for recovery, automated usage, and more.

Tip 1 — Recovering Access

As with any system that you harden and secure, you become responsible for managing that security. In this case, that means not losing your SSH key or your TOTP secret key and making sure you have access to your TOTP app. However, sometimes things happen, and you can lose control of the keys or apps you need to get in.

Losing an SSH Key or TOTP Secret Key

If you lose your SSH key or TOTP secret key, recovery can be broken up into a couple of steps. The first is getting back in without knowing the verification code and the second is finding the secret key or regenerating it for normal MFA login.
To get in after losing the secret key that generates the verification code on a DigitalOcean Droplet, you can simply use the virtual console from your dashboard to log in using your username and password.
Otherwise, you'll need an administrative user that has sudo access; make sure not to enable MFA for this user, but use just an SSH key. If you or another user loses their secret key and can't log in, then the administrative user can log in and help recover or regenerate the key for any user using sudo.
Once you're logged in, there are two ways to help get the TOTP secret:
  1. Recover the existing key
  2. Generate a new key
In each user's home directory, the secret key and Google Authenticator settings are saved in ~/.google_authenticator. The very first line of this file is the secret key. A quick way to get the key is to execute the following command, which displays the first line of the .google_authenticator file (i.e. the secret key). Then, take that secret key and manually type it into a TOTP app.
  • head -n 1 /home/sammy/.google_authenticator
If there is a reason not to use the existing key (for example, being unable to share the secret key with the impacted user securely, or the existing key was compromised), you can remove the ~/.google_authenticator file outright. This will allow the user to log in again using only a single factor, assuming you haven't enforced MFA by removing nullok in the /etc/pam.d/sshd file. They can then run google-authenticator to generate a new key.

Losing Access to the TOTP App

If you need to log in to your server but don't have access to your TOTP app to get your verification code, you can still log in using the recovery codes that were displayed when you first created your secret key. Note that these recovery codes are one-time use. Once one is used to log in, it cannot be used as a verification code again.

Tip 2 — Changing Authentication Settings

If you want to change your MFA settings after the initial configuration, instead of generating a new configuration with the updated settings, you can just edit the ~/.google_authenticator file. This file is laid out in the following manner:
.google_authenticator layout
<secret key>
<options>
<recovery codes>
Options that are set in this file have a line in the options section; if you answered "no" to a particular option during the initial setup, the corresponding line is excluded from the file.
Here are the changes you can make to this file:
  • To enable sequential codes instead of time-based codes, change the line " TOTP_AUTH to " HOTP_COUNTER 1.
  • To allow multiple uses of a single code, remove the line " DISALLOW_REUSE.
  • To extend the code expiration window to 4 minutes, add the line " WINDOW_SIZE 17.
  • To disable multiple failed logins (rate limiting), remove the line " RATE_LIMIT 3 30.
  • To change the threshold of rate limiting, find the line " RATE_LIMIT 3 30 and adjust the numbers. The 3 in the original indicates the number of attempts over a period of time, and the 30 indicates the period of time in seconds.
  • To disable the use of recovery codes, remove the five 8-digit codes at the bottom of the file.
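For reference, here is a rough sketch of what such a file might contain after the answers given in Step 1; the secret key and recovery codes below are made-up placeholders, and the exact order of the option lines may differ on your system:
Example .google_authenticator contents
ABCDEFGHIJKLMNOP
" RATE_LIMIT 3 30
" DISALLOW_REUSE
" TOTP_AUTH
11111111
22222222
33333333
44444444
55555555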

Tip 3 — Avoiding MFA for Some Accounts

There may be a situation in which a single user or a few service accounts (i.e. accounts used by applications, not humans) need SSH access without MFA enabled. For example, some applications that use SSH, like some FTP clients, may not support MFA. If an application doesn't have a way to request the verification code, the request may get stuck until the SSH connection times out.
As long as a couple of options in /etc/pam.d/sshd are set correctly, you can control which factors are used on a user-by-user basis.
To allow MFA for some accounts and SSH key only for others, make sure the following settings in /etc/pam.d/sshd are active.
/etc/pam.d/sshd
#%PAM-1.0
auth       required     pam_sepermit.so
#auth       substack     password-auth

. . .

# Used with polkit to reauthorize users in remote sessions
-session   optional     pam_reauthorize.so prepare
auth       required      pam_google_authenticator.so nullok
Here, auth substack password-auth is commented out because passwords need to be disabled. MFA cannot be forced if some accounts are meant to have MFA disabled, so leave the nullok option on the final line.
After setting this configuration, simply run google-authenticator as each user that needs MFA, and don't run it for users where only SSH keys will be used.

Tip 4 — Automating Setup with Configuration Management

Many system administrators use configuration management tools, like Puppet, Chef, or Ansible, to manage their systems. If you want to use a system like this to set up a secret key when a new user account is created, there is a method to do that.
google-authenticator supports command line switches to set all the options in a single, non-interactive command. To see all the options, you can type google-authenticator --help. Below is the command that would set everything up as outlined in Step 1:
  • google-authenticator -t -d -f -r 3 -R 30 -W
This answers all the questions we answered manually, saves the result to a file, and then outputs the secret key, QR code, and recovery codes. (If you add the flag -q, there won't be any output.) If you do use this command in an automated fashion, make sure to capture the secret key and/or recovery codes and make them available to the user.
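For example, a configuration-management job might run the generator for the target account and then capture the secret for hand-off. The sketch below reuses the example user sammy from this tutorial; sudo -H ensures the file is written to that user's home directory:

# Generate the key non-interactively and quietly for the sammy account
sudo -H -u sammy google-authenticator -t -d -f -r 3 -R 30 -W -q
# Grab the secret key (first line of the file) so it can be handed to the user
sudo head -n 1 /home/sammy/.google_authenticator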

Tip 5 — Forcing MFA for All Users

If you want to force MFA for all users even on the first login, or if you would prefer not to rely on your users to generate their own keys, there's an easy way to handle this. You can simply use the same .google_authenticator file for each user, as there's no user-specific data stored in the file.
To do this, after the configuration file is initially created, a privileged user needs to copy the file to the root of every home directory and change its permissions to the appropriate user. You can also copy the file to /etc/skel/ so it's automatically copied over to a new user's home directory upon creation.
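A minimal sketch of those copies, assuming sammy's file is the pre-generated one and newuser is a placeholder for the account being provisioned:

# Stage the shared secret for accounts created in the future
sudo cp /home/sammy/.google_authenticator /etc/skel/
# Copy it to an existing account (newuser is a placeholder), then fix ownership and permissions
sudo cp /home/sammy/.google_authenticator /home/newuser/
sudo chown newuser: /home/newuser/.google_authenticator
sudo chmod 600 /home/newuser/.google_authenticator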
Warning: This can be a security risk because everyone is sharing the same second factor. This means that if it's leaked, it's as if every user had only one factor. Take this into consideration if you want to use this approach.


Another method to force the creation of a user's secret key is to use a bash script that:
  1. Creates a TOTP token,
  2. Prompts them to download the Google Authenticator app and scan the QR code that will be displayed, and
  3. Runs the google-authenticator application for them after checking whether the .google_authenticator file already exists.
To make sure the script runs when a user logs in, you can name it .bash_login and place it at the root of their home directory.
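A minimal sketch of such a .bash_login script, assuming the google-authenticator package from Step 1 is already installed (the prompts and flag choices mirror Step 1 and Tip 4):

# Run the TOTP setup only if this account has no secret key yet
if [ ! -f "$HOME/.google_authenticator" ]; then
    echo "No TOTP secret found for $USER."
    echo "Install the Google Authenticator app, then scan the QR code below."
    google-authenticator -t -d -f -r 3 -R 30 -W
fi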

Conclusion

By having two factors (an SSH key + MFA token) across two channels (your computer + your phone), you've made it very difficult for an outside agent to brute force their way into your machine via SSH, and you've greatly increased its security.