Welcome to the Course
Welcome to the Linux Fundamentals course! Whether you are new to Linux or looking to deepen your knowledge, this course will guide you through essential concepts and practical skills to equip you with the tools necessary to become proficient in Linux system administration.
Linux is one of the most widely used operating systems in the world, powering everything from personal computers and web servers to embedded systems and smartphones. Its open-source nature, robust performance, and flexibility make it the preferred choice for developers, system administrators, and IT professionals. Learning Linux can open the door to many career opportunities in software development, system administration, cloud computing, and cybersecurity.
What Will You Learn?
We'll start by introducing you to Linux and its distributions, the installation process, and the Linux file system structure. You'll also get hands-on practice with essential commands and text editors to help you navigate and manage your system effectively.
Once you’re comfortable with the basics, we’ll dive into file operations, search techniques, shell scripting, process management, and networking. You’ll learn to automate tasks, manage processes, and troubleshoot network issues.
As you move forward, you’ll learn key concepts around system security, managing users and groups, backups and recovery strategies, and automation. You’ll also develop shell scripting skills and apply them to real-world administration tasks.
In the final phase of the course, we explore the cloud environment, containerization with Docker, virtualization, and DevOps tools like Kubernetes, Jenkins, and Ansible. You’ll also learn how to customize your shell environment and work with advanced tools that are vital in today’s IT landscape.
Course Structure
Day 2: Installation and Setup
Day 3: Linux File System Structure
Day 1: Advanced File Operations
I am excited to have you on this learning journey. Each week, you'll have exercises, quizzes, and real-world projects to practice and reinforce your learning. Whether you're aiming to get better at Linux as a hobby or to start a career in this area, this course will equip you with the skills you need to succeed.
Throughout the course there are exercises to complete. For some I have given step-by-step instructions on how to complete them; for others I have not, because I think doing research on the internet is good practice. Along with the exercises there are quizzes to complete, and there is a section for the answers. I have also added layman's-terms explanations for some of the subject matter, for the sections I think are harder to understand.
I also have videos for some of the topics in the course. I will leave the link to my YouTube channel here: https://www.youtube.com/watch?v=bGfrM7kuO-0&list=PLcHMVqvFpPegTIQSAoqTUpn-mJUS-igT6
Ready to dive into Linux? Let’s get started!
The history of Linux begins with the creation of Unix in the late 1960s and early 1970s at AT&T's Bell Labs. Unix was a pioneering operating system, emphasizing simplicity, portability, and multi-user capabilities. Its design and source code greatly influenced the development of many subsequent operating systems.
In 1983, Richard Stallman initiated the GNU Project, aiming to create a free Unix-like operating system. The GNU Project successfully developed many crucial components, including compilers, libraries, and utilities. However, it lacked a working kernel.
In 1991, Linus Torvalds, a Finnish computer science student, started a personal project to create a free operating system kernel. On August 25, 1991, he announced his project on the MINIX newsgroup, seeking collaboration and feedback. Torvalds released the first version of the Linux kernel (version 0.01) on September 17, 1991. Initially, it was just a hobby, but the project quickly gained attention and support from developers worldwide.
By 1992, the Linux kernel was licensed under the GNU General Public License (GPL), which facilitated its growth as developers could freely use, modify, and distribute the code. This combination of the Linux kernel and GNU components formed a complete, functional operating system.
In 1993, Ian Murdock founded the Debian Project, which aimed to create a robust and stable Linux distribution. Debian's commitment to free software and its democratic governance model made it a significant and influential project.
sudo apt update
This command updates the list of available packages and their versions. It doesn't install or upgrade any packages; it just fetches information about the latest versions of the packages from the repositories that are configured on your system.
When you run sudo apt update, the package manager checks the URLs listed in the sources (e.g., /etc/apt/sources.list) and downloads the latest package lists from the repositories. This ensures that your system is aware of the latest versions of the software that is available.
sudo apt upgrade
This command upgrades all the installed packages on your system to the newest available versions, based on the package lists retrieved by sudo apt update. When you run sudo apt upgrade, the package manager compares the versions of the installed packages with the versions available in the updated package lists. If newer versions are found, the package manager will download and install them. This is done without removing any packages or installing new ones, unless necessary to satisfy dependencies.
You can run these commands together: sudo apt update && sudo apt upgrade. A reboot may be necessary after updates and upgrades.
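As a quick sketch of a typical update routine (the -y flag and the reboot check are optional extras, not something the course requires):
sudo apt update && sudo apt upgrade -y                            # -y answers the upgrade prompt automatically
[ -f /var/run/reboot-required ] && echo "A reboot is recommended" # Debian/Ubuntu create this file when a reboot is needed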
Repositories (or "repos") are storage locations where software packages are stored and maintained. Think of them as online warehouses that contain a large collection of software that can be installed on your system. Each repository typically holds packages for a specific distribution or version of a Linux operating system.
When you install or update software on a Linux system using a package manager (like apt on Debian-based systems), the package manager downloads the necessary files from these repositories. Repositories are defined in the system’s configuration files (usually located in /etc/apt/sources.list or /etc/apt/sources.list.d/ for Debian-based systems). These files tell the package manager where to look for software packages.
Dependencies are libraries or packages that a piece of software requires to function properly. For example, if you want to install a program, it might depend on certain libraries being present on your system. When you install software using a package manager, it automatically checks for these dependencies and installs them if they are not already present.
Dependencies ensure that the software you install runs smoothly by having all the necessary components in place. For example, a photo-editing program might depend on certain image processing libraries. The package manager resolves these dependencies by installing the required packages. If there’s a conflict or a missing dependency, the installation might fail, or the software might not work correctly.
Sometimes, dependencies have their own dependencies. This creates a dependency chain. The package manager takes care of these chains, ensuring all necessary components are installed.
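If you want to see a dependency chain for yourself, apt can show it. A small sketch, assuming a Debian/Ubuntu system and using curl purely as an example package:
apt-cache depends curl   # list the packages curl depends on
apt show curl            # package details, including a Depends: line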
In the top right you have the internet, sound, and power buttons. Click on this area and a box with more options will appear: you have the option of dark or light mode, and of balanced power or power saver. The internet button can be turned on or off, or opened for more options. The sound bar can be adjusted, left for down and right for up. In the top left of this box are the screen recorder and screenshot buttons. To the right of these is the settings button, which brings up a whole host of settings; we will go over these in a moment. To the right of the settings we have the lock screen button, which locks the screen; to get back in, enter your password. To the right of that we have the power button, where you can log out, suspend, restart, or shut down.
Manages wireless and wired network connections. Here, you can connect to available Wi-Fi networks, view saved networks, and configure network settings like IP addresses (static or DHCP). It also allows you to toggle Wi-Fi on or off.
Controls Bluetooth settings for connecting to Bluetooth devices such as headphones, speakers, keyboards, mice, and other peripherals. You can pair new devices, manage paired devices, and toggle Bluetooth on or off.
Manages display settings for monitors and screens. You can adjust screen resolution, orientation, and refresh rate. It also includes options for configuring multiple monitors, screen scaling, and display arrangement.
Manages sound settings, including output devices (speakers, headphones), input devices (microphones), and system volume. It also allows you to configure sound effects, input/output levels, and manage audio profiles.
Configures power management settings, including screen brightness, screen dimming, sleep, and suspend options. It also allows you to manage power settings for battery and plugged-in modes, and control the behavior when the lid is closed.
Manages multitasking and window management settings. You can configure how workspaces are used, enable or disable hot corners, and adjust window snapping behavior.
Allows you to change the desktop wallpaper and lock screen background. You can choose from pre-installed backgrounds or set a custom image.
Manages installed applications. You can control application permissions, set default applications for different file types, and configure startup applications.
Manages system notifications. You can control how and when notifications are displayed for different applications and system events. It allows you to enable or disable notifications for specific apps.
Configures the behavior of the system search function. You can select which applications and file types are included in search results, and toggle the search feature on or off for certain categories.
Allows you to add and manage online accounts (e.g., Google, Microsoft, Nextcloud) that integrate with your system. These accounts can be used for email, calendar, contacts, file storage, and other services.
Configures sharing options for your system, such as enabling screen sharing, file sharing, and remote login. You can set up and manage services that allow others to access your system or shared resources.
Configures mouse and touchpad settings, such as pointer speed, button configuration (left or right-handed), touchpad gestures, and tap-to-click options.
Manages keyboard settings, including layout, shortcuts, repeat rate, and delay. You can configure keyboard layouts for different languages and set custom shortcuts for various system actions.
Manages color profiles for connected displays and printers. This setting is useful for color calibration and ensuring accurate color reproduction on your monitor and in printed materials.
Manages connected printers. You can add new printers, remove existing ones, and configure printer settings such as default paper size and print quality.
Provides settings to assist users with disabilities. It includes options for screen readers, magnification, high contrast themes, on-screen keyboard, and other assistive technologies.
Controls privacy settings such as location services, usage history, screen lock settings, and data collection preferences. It also includes options to clear recent files and other sensitive data.
Displays system information, including the Ubuntu version, device name, hardware specifications, and available updates. It also provides links for system support and legal information.
Configures date and time settings. You can set the time zone, enable or disable automatic time synchronization, and manually adjust the date and time.
Manages user accounts on the system. You can add, remove, and modify user accounts, change passwords, and configure user permissions (e.g., administrator vs. standard user).
Allows for remote access. Use a remote desktop app to connect using the RDP protocol. You will need to provide additional information to connect.
First go to https://ubuntu.com/download. Here you will download the Ubuntu desktop version. When the download is finished you will have it in your Downloads folder as an ISO file.
In order for me to get VMware I had to sign up to a website called Broadcom; you may have to do this too, or use VirtualBox instead. Once you are signed in, click on the drop-down menu at the top next to your name and choose VMware Cloud Foundation. This should take you to a dashboard. On the left click on My Downloads, and you should find VMware Workstation 17 Pro here. Then download the correct one for your machine; this should be the one for personal use.
In the directory where your VMware download is, you need to give it execute permissions. We will go over permissions later on in the course; for now follow along by typing:
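(the exact file name will differ depending on the version you downloaded, so adjust it to match yours)
chmod +x VMware-Workstation-Full-17.x.x.x86_64.bundle   # make the installer executable
sudo ./VMware-Workstation-Full-17.x.x.x86_64.bundle     # run the installer with root privileges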
When I installed VMware I came across some issues with vmmon and vmnet. If you also have issues with these, follow along with this.
Download vmware-host-modules-workstation-17.5.1 from GitHub. Even though I have VMware 17.5.2, only 17.5.1 was available on GitHub, but this still worked. Type:
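(this is one commonly used approach, assuming the mkubecek/vmware-host-modules repository on GitHub; adjust the branch name if a closer match for your VMware version exists)
git clone https://github.com/mkubecek/vmware-host-modules.git
cd vmware-host-modules
git checkout workstation-17.5.1   # branch with module sources for 17.5.1
make                              # build the vmmon and vmnet kernel modules
sudo make install                 # install them so VMware can load them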
If you have this issue, this should fix it.
Now, to power on VMware, type sudo vmware. If you find VMware in your programs and start it by clicking on the VMware Player icon, it may not work after you set up your machine, so I recommend typing sudo vmware.
You can follow these steps:
The Ubuntu virtual machine should start.
Installing Ubuntu on a USB Drive using balenaEtcher
Open balenaEtcher on your computer after installation.
You will see a simple interface with three main steps: select the Ubuntu ISO file you downloaded, select the target USB drive, and click Flash.
Once the flashing and verification process is complete, balenaEtcher will notify you that the process has finished successfully. Safely eject your USB drive from your computer.
Understanding the directory structure of Ubuntu (and Linux in general) is key to navigating and managing the system. The Linux filesystem follows the Filesystem Hierarchy Standard (FHS), which defines the locations of important directories and their contents. The root directory (/) is the base of the entire filesystem. All other directories and files branch off from this root. It is owned by the root user, and only administrators (via sudo) can make changes to this directory.
Essential command binaries (executables) required for the system to operate in single-user mode and to boot. These commands are accessible to all users. These commands are for the basic functionality of the system, and they are used early during the boot process. These binaries are necessary for both the system administrator and regular users to perform fundamental tasks.
Examples: basic commands like ls, cp, mv, rm, cat, bash.
Files required to boot the system, including the kernel (vmlinuz), the initial ramdisk image (initrd.img), and the boot loader configuration files (like GRUB). This directory holds everything needed to load the Linux kernel and start the boot process.
Examples: the kernel image (vmlinuz), the initial ramdisk (initrd.img), the System.map file for the installed kernel version, and the grub/ directory.
Files that represent hardware devices. These are not actual files but special files that act as interfaces to hardware (e.g., hard drives, USB devices, input devices). Provides access to hardware devices through the filesystem.
Examples: sda (hard drive), tty (terminal devices), null, random, zero.
System-wide configuration files and shell scripts that are used during startup and by different services.
Examples: network configuration files (/etc/network/interfaces), user account and password files (/etc/passwd, /etc/shadow), service startup scripts (/etc/init.d/).
This stores personal directories for each user on the system.
Examples: /home/john, /home/sarah, etc. Each user has their own directory where they store personal files, settings, and data.
Essential shared libraries (similar to Windows DLLs) needed by binaries in /bin and /sbin. Provides essential library files required for system programs to run.
Examples: libc.so, libm.so, and kernel modules in /lib/modules/.
Every Linux file system includes a lost+found directory. In the event of a file system crash, a file system check is performed during the next boot. Any corrupted files detected are placed in the lost+found directory, allowing you to attempt data recovery.
Mount points for removable media like USB drives, CDs, and external hard drives. Temporarily mounts removable media when connected to the system.
Examples: if you insert a USB drive, it might get mounted to /media/username/usbname.
Empty directory used for temporarily mounting filesystems, often used by system administrators. Provides a standard location for temporarily mounting external filesystems for system maintenance.
Third-party or optional software that is not part of the standard Linux package management system. On install it may place files into /opt/application folder.
Examples: proprietary software or packages installed outside of the package manager, like Google Chrome or Oracle Java.
Virtual filesystem that provides an interface to kernel data structures. It contains information about the system and running processes. The /proc directory is similar to the /dev directory in that it doesn’t contain standard files. Instead, it holds special files that represent system and process information.
Examples: /proc/cpuinfo (CPU information), /proc/meminfo (memory usage), and a /proc/<PID>/ directory for every running process.
The home directory for the root user. Provides a separate home directory for the root user, as they don't use the /home directory like normal users.
Data that is used by applications and services during runtime. This directory is typically cleared during boot. Stores volatile runtime data for processes that need to communicate with one another. These files can't be stored in /tmp because files in /tmp may be deleted.
Examples: process IDs (PID files), lock files, socket files.
Essential system binaries, similar to /bin, but primarily for system administration tasks that require superuser (root) privileges. Provides essential binaries for system maintenance and administrative tasks.
Examples: fsck, reboot, shutdown, mkfs.
Data used by services offered by the system, such as web servers or FTP servers.
Examples: web server data (/srv/www), FTP data (/srv/ftp).
Provides a virtual filesystem for exposing kernel and device data. Similar to /proc, it allows access to kernel information. Access to information about hardware devices, kernel modules, and system settings.
Examples: /sys/class/, /sys/devices/.
Applications save temporary files in the /tmp directory. These files are typically removed when the system is rebooted and can also be deleted at any time by utilities like systemd-tmpfiles.
Another directory that isn't part of the Filesystem Hierarchy Standard (FHS) but is commonly found on today's systems is /snap. It stores installed Snap packages and related files associated with Snap. While Ubuntu uses Snaps by default, this directory may not be present on distributions that do not support or use Snap packages.
The /usr directory houses applications and files intended for user use, as opposed to those used by the system. For instance, non-essential applications are stored in the /usr/bin directory rather than /bin, and non-essential system administration binaries are placed in /usr/sbin instead of /sbin. Corresponding libraries are located in the /usr/lib directory. Additionally, the /usr directory includes other subdirectories, such as /usr/share, which contains architecture-independent files like graphics. The /usr/local directory is the default installation location for locally compiled applications. This separation helps prevent these applications from interfering with the rest of the system.
/var stands for "variable" and is a directory in the Linux file system where variable data is stored. Variable data refers to data that changes frequently, such as logs, caches, and temporary files. /var is designed to store data that is written to by the system or applications, and is therefore subject to change.
In this section we will cover some of the basic commands used throughout the Linux system.
cd: This Linux command is used to change the current directory you are in.
Usage: cd directory
ls: Lists the contents of a directory.
Usage: ls [options] [directory]
pwd: Displays the full path of the current directory you are in.
Usage: pwd
cp: Copies files or directories from one location to another.
Usage: cp source destination
mv: Moves or renames files or directories.
Usage: mv source destination: Move a file or directory to another location or rename it.
rm: Deletes files or directories.
Usage: rm filename
touch: Creates an empty file or updates the timestamp of an existing file.
Usage: touch newfilename
mkdir: Creates a new directory.
Usage: mkdir directoryname
Wildcards can help manage multiple files efficiently:
Copy all .txt files from one directory to another: cp *.txt /path/to/destination/
Example: cp *.txt /home/john/Documents
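Putting a few of these commands together, a short practice session might look like this (the directory and file names are just examples):
mkdir practice                   # create a new directory
cd practice                      # move into it
touch notes.txt todo.txt         # create two empty files
ls -l                            # list them with details
cp notes.txt notes_backup.txt    # copy a file
mv todo.txt tasks.txt            # rename a file
cp *.txt /tmp/                   # copy every .txt file using a wildcard
rm notes_backup.txt              # delete the copy
pwd                              # confirm where you are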
Here are 10 exercises that you can do to practice navigating the file system and performing file operations in Linux:
Exercise 1: Practice moving between directories.
Exercise 2: Create directories and practice navigating them.
Exercise 3: Use different options of the ls command to view directory contents.
Exercise 4: Practice creating files.
Exercise 5: Copy files from one location to another.
Exercise 6: Move and rename files.
Exercise 7: Remove files and directories.
Exercise 8: Practice using pwd to find out where you are in the filesystem.
Exercise 9: Create, view, and remove hidden files.
Exercise 10: Create and manipulate nested directories.
These exercises will help you build confidence in navigating directories and performing common file operations on a Linux system.
Linux comes with several text editors that allow users to create and edit text files directly from the command line. The two most popular ones are nano and vim. Each editor has its own strengths, and learning how to use them will greatly improve your ability to manage configuration files and scripts.
1. Nano: A Beginner-Friendly Text Editor
Nano is a simple, easy-to-use text editor that is ideal for beginners. It provides a user-friendly interface with a helpful command list at the bottom of the screen. You can edit configuration files, scripts, or write documents without needing to learn complex commands.
Vim is a highly configurable and powerful text editor. It has a steeper learning curve compared to nano, but it provides advanced functionality for efficient text manipulation. Vim operates in different modes:
This is similar to nano; type: vim filename.txt
I personally use nano when using editors; it is easy to use, but I think Vim can become easy too once you start using it. Vim is considered more advanced than nano. By practicing with both editors, you'll become comfortable with editing files in a variety of environments and workflows.
Learn to create, edit, and save files using Nano.
Practice searching, cutting, and pasting text within a file.
Search for the word "practice":
Once you've found the word, cut the entire line containing "This is a practice exercise.":
Move to the top of the file (using the arrow keys) and paste the line at the top:
Save and exit the file:
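The nano keystrokes for the steps above are as follows:
Ctrl+W, type practice, then Enter   # search ("Where Is") for the word
Ctrl+K                              # cut the current line
Ctrl+U                              # paste (uncut) the line at the cursor
Ctrl+O, then Enter                  # write out (save) the file
Ctrl+X                              # exit nano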
Get familiar with navigating and editing a file in Vim.
Steps: Press i to enter Insert Mode, then type the following tasks:
Press Esc to return to Normal Mode.
Save the file and exit Vim (with :wq).
Practice using deletion, undo, and search features in Vim.
In Normal Mode, delete the second line (the one that says "Complete Linux homework"):
Undo the deletion:
Search for the word "Exercise":
Once you've found the word, move to Insert Mode (press i) and add "outdoors" after "Exercise". Save and exit:
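The Vim commands used in this exercise are:
dd              # in Normal Mode, delete the current line
u               # undo the last change
/Exercise       # search for "Exercise"; press n to jump to the next match
i               # enter Insert Mode to add the word "outdoors"
Esc, then :wq   # return to Normal Mode, save, and quit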
These exercises will help you get comfortable with both nano and vim, building skills for basic text editing, navigation, and file manipulation in the Linux environment.
Week one of our Linux course covers essential concepts and skills to build a solid foundation in Linux. Let’s recap each major topic and review the key takeaways:
Background: Linux, created by Linus Torvalds in 1991, was inspired by Unix, a powerful operating system used in academic and corporate settings. Linux grew with the help of the open-source community, evolving into a robust OS for servers, desktops, and embedded systems.
Distributions (Distros): Linux comes in different flavors (distributions) like Ubuntu, Fedora, Debian, CentOS, and Arch Linux. Each has unique features, package managers, and use cases:
Linux's open-source nature led to a wide variety of distributions tailored for different users and needs.
Setting up Linux involves choosing a distribution, creating installation media, and configuring the system.
Creating Installation Media: You can use tools like Balena Etcher to make a bootable USB drive from an ISO file. Installation Process: This involves selecting disk partitions, choosing a username, password, and configuring system settings.
Installing Linux is straightforward with tools and guided installers, allowing you to personalize your setup for optimal performance.
The Linux file system is organized in a hierarchical directory structure, beginning from the root directory (/). Key directories include:
Understanding the Linux directory structure is crucial for navigating and managing files effectively.
Learning a few core commands helps you navigate and manipulate files in Linux:
Familiarity with basic commands helps you perform day-to-day tasks and manage the file system from the command line.
Text editors in Linux allow you to create and edit files directly from the terminal. Two popular editors are:
Nano and Vim give you flexibility in editing system files and scripts, an essential skill for Linux administration.
Below is a 10-question quiz to test knowledge from Week 1 of the course. Each question has 4 multiple-choice options, with the correct answer in the answers section.
Answers:
Permissions and Ownership (chmod, chown)
Linux file types serve not only as classifications but also influence how files function within the system. Regular files store data, while directories, visually similar to folders, help manage and organize these files. Symbolic links act as pointers or shortcuts to other files or directories. Block and character devices correspond to hardware, with block devices allowing random-access I/O operations and character devices supporting sequential-access I/O. Sockets facilitate communication between processes, and pipes enable sequential data flow between them.
File permissions in Linux are a key security mechanism, providing detailed control over who can access or modify files. These permissions can be combined to offer different levels of access. For instance, read permission lets a user view the file's content, write permission allows modifications, and execute permission enables the file to be run as a program. In the case of directories, execute permission permits users to list or access the directory's contents, with slightly different effects than for files.
In Linux, ownership plays a key role in resource sharing and managing permissions. Every file is linked to a user (the owner) and a group. The user is typically the file's creator or the one who last modified it, while groups consist of multiple users who share access rights. Knowing how to modify file ownership and group associations using the chown and chgrp commands is essential for controlling access to files and directories effectively.
In Linux, every file and directory has three sets of permissions for three categories of users: the owner (user), the group, and others.
Permissions are represented as: r (read), w (write), and x (execute).
Each file's permissions are shown as a series of 10 characters:
The first character represents the type of file (- for regular files, d for directories), and the remaining nine characters represent the read, write, and execute permissions for the owner, group, and others.
The chmod command is used to change the permissions of a file or directory.
Give the owner execute permissions:
Remove write permissions for the group:
Each permission is represented by a number:
r = 4, w = 2, x = 1.
You can sum these numbers to set permissions:
To give the owner full permissions (rwx), the group read and execute (r-x), and others read-only (r--), you would run chmod 754 on the file (for example, chmod 754 file.txt, where file.txt is a placeholder name).
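The symbolic forms described above look like this (file.txt is a placeholder name):
chmod u+x file.txt   # give the owner (user) execute permission
chmod g-w file.txt   # remove write permission from the group
chmod 754 file.txt   # numeric form: rwx for owner, r-x for group, r-- for others
ls -l file.txt       # check the resulting permissions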
The chown command is used to change the ownership of a file or directory.
Change the owner of file.txt to john:
Change both the owner and the group:
The chgrp command can be used to change the group ownership of a file:
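For example (john is the user from the example above, and developers is a placeholder group name):
sudo chown john file.txt              # change the owner of file.txt to john
sudo chown john:developers file.txt   # change the owner and the group at the same time
sudo chgrp developers file.txt        # change only the group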
Now, imagine there are three groups of people who can come to your house (think of the house as your file):
So, when we talk about file permissions, we're talking about who can read, write, or execute files. Each file has permissions set for you (owner), your group of friends, and everyone else.
Let's say you have a file called security.txt. You want to be able to read and write it yourself, your group to only be able to read it, and everyone else to have no access at all.
The permissions could be like this:
-rw-r----- means: the owner can read and write the file, the group can only read it, and everyone else has no access.
Now, let’s talk about ownership. Just like a house has an owner, every file on your computer has someone who "owns" it. The owner of a file is usually the person who created it, but you can also change who owns a file if you want.
There are two parts to ownership: the user (the individual owner of the file) and the group (a set of users who share access to it).
Let’s say you create a file called game.txt, and you’re the owner. But now, you want to let your friend Emma be the new owner. You can do this by changing the ownership using the chown command.
Command: sudo chown emma game.txt
What happens: Emma becomes the new owner of game.txt.
You can also change the group (your group of friends) who can access the file. Let’s say you have a group called school that can access the file
Command: sudo chgrp school game.txt
What happens: the school group now has whatever group permissions are set on the file.
Imagine you have a treasure map file, and you want full access for yourself, read-only access for your group, and no access for anyone else.
Here's how it would look: chmod 740 treasure_map.txt (the file name is just an example).
This number, 740, breaks down like this: 7 (rwx) for you, the owner; 4 (r--) for your group; and 0 (no permissions) for everyone else.
Inodes are a distinctive element of the Linux file system, storing metadata about files separately from the actual file data. This metadata includes details such as the file's size, ownership, permissions, and the disk locations of the file’s data blocks.
A hard link is an additional name (directory entry) that points to the same inode as an existing file. Features of Hard Links: they share the same inode and data as the original file, the data remains accessible as long as at least one link to it exists, they cannot span different file systems, and they cannot normally be created for directories.
A symbolic link (also called a soft link or symlink) is a pointer to another file or directory. Unlike hard links, symlinks can point to files on different file systems, but they break if the target file is deleted.
Creating a Symbolic Link: ln -s target_file link_name
Example: ln -s file.txt symlink.txt (the names here are placeholders).
Features of Symbolic Links: they have their own inode and simply store the path to their target, they can point to directories and to files on other file systems, and they break if the target is deleted or moved.
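For comparison, a quick hard-link sketch (file names are illustrative):
ln file.txt hardlink.txt       # create a hard link: a second name for the same data
ls -li file.txt hardlink.txt   # -i shows the inode number; both names share the same one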
Users can effectively manage files, control access, and create links for better file organization in a Linux environment.
Imagine you have a watch. Now, let's say you could duplicate the watch so that there are two identical ones, but both watches are connected. It doesn't matter which watch you wear: they're exactly the same, and if one gets scratched or broken, the other one does too.
This is what a hard link does in the computer! When you create a hard link, you’re making another name for a file, and both names point to the same file data. If you change the file through either name, the changes appear everywhere because it’s really the same file with different names.
Example: You have a file called security.txt. Now, you want to make a hard link to this file, so it has a second name, but it's still the same file.
Command: ln security.txt new_security.txt
Now, both security.txt and new_security.txt are the same file! If you open new_security.txt and make changes, the same changes will show up in security.txt, because they are connected.
If you delete security.txt, don’t worry! new_security.txt still works because the actual file data is still there.
Now, imagine you want to give your friend a map to your favorite secret spot in the park (your file). Instead of making another version of the spot (like a hard link), you just give them a shortcut to find it. If the spot moves (or you delete it), your map will lead to nothing because the place is gone.
This is what a symbolic link (or symlink) does! It’s like a shortcut on your computer that points to another file or folder. If the original file is deleted or moved, the symlink breaks and no longer works.
You have a file called book.txt in your Documents folder, but you want to access it from your Desktop. Instead of moving the file, you can create a symbolic link (a shortcut) on the Desktop that points to the file in Documents.
Command: ln -s /home/user/Documents/book.txt /home/user/Desktop/book_link.txt
Now, book_link.txt on your Desktop is just a pointer to the real book.txt in your Documents folder. If you open book_link.txt, it takes you to the real book.txt. But if you delete or move book.txt, the shortcut stops working.
Comparing Hard Links and Symbolic Links
1. Hard Links. Hard Link Example: ln security.txt backup_security.txt
Now, if you open backup_security.txt and write "Done!" in it, you will also see "Done!" in security.txt, because it’s really the same file.
2. Symbolic Links (Symlinks). Symbolic Link Example: the book_link.txt shortcut created above with ln -s.
Hard Links are like having multiple names for the same file. If one name is deleted, the file still exists under the other name. Changes made to the file through any link affect the same file data.
Symbolic Links are like shortcuts that point to a file. If the original file is deleted or moved, the symlink doesn’t work anymore because it’s just a pointer.
The find command is like a treasure hunt! You can use it to search for files and directories based on different criteria like name, size, date modified, etc.
find /path/to/search -name "filename"
Find a file by name in your home folder:
Find all text files in a folder:
find / -size +50M
Find files modified in the last 7 days:
This finds files that have been changed in the last 7 days.
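Hedged examples of the searches described above (paths and names are placeholders):
find ~ -name "notes.txt"             # find a file by name in your home folder
find /path/to/folder -name "*.txt"   # find all text files in a folder
find / -size +50M 2>/dev/null        # files larger than 50 MB, hiding permission errors
find . -mtime -7                     # files modified in the last 7 days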
The locate command is like using a search engine for your files. It works super fast because it searches through a pre-built database of all your files, rather than checking each folder in real-time like find does.
Basic Syntax:
Find all .png files:
The locate command depends on a database, so if you've added or removed files recently, you might want to update the database with sudo updatedb.
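Basic usage is simply locate followed by a name or pattern (on newer Debian/Ubuntu releases you may need to install it first, for example with sudo apt install plocate):
locate filename
locate "*.png"   # find all .png files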
The grep command is like using Ctrl+F in a document, but for your whole system! It searches inside files for specific text or patterns.
Basic Syntax:
Find a word inside a specific file:
Search through multiple files:
Search recursively through directories:
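For example (the search word and file names are placeholders):
grep "hello" notes.txt             # find a word inside a specific file
grep "hello" file1.txt file2.txt   # search through multiple files
grep -r "hello" /path/to/dir       # search recursively through directories
grep -i "hello" notes.txt          # -i makes the search case-insensitive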
The which command is used to find where a program is installed or located. This is useful if you want to know where the system is keeping your command or program.
Basic Syntax:
Example:
Find where the python3 executable is located:
This will return something like /usr/bin/python3, showing you where the Python program is located.
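The command for the example above is simply:
which python3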
The whereis command not only tells you where a program is located but also shows you its manuals and source code (if available).
Basic Syntax:
Example:
Find where the bash program and its related files are located:
whereis bash
Create a directory called practice_search in your home folder:
Move into the directory:
Create three text files named apple.txt, banana.txt, and cherry.txt using touch:
Use find to locate the file banana.txt inside the practice_search directory.
Create two more files: notes1.txt and notes2.txt in the practice_search directory:
Use find to search for all files that have the word "notes" in their name.
Create a large file (let’s say 10 MB) inside the practice_search directory:
Use find to search for files larger than 1 MB in size inside the practice_search directory.
Install the locate command (if it’s not already installed):
Update the database:
Use locate to search for files containing the word "apple" anywhere on your system.
Add some content to the file apple.txt:
Add some content to the file banana.txt:
Use grep to find the word "yellow" in the files inside the practice_search directory.
Inside the practice_search directory, create a subdirectory called subfolder:
Create a new text file called fruit_info.txt inside subfolder:
Add some content to fruit_info.txt:
echo "Cherries are small and red." > subfolder/fruit_info.txt
Use grep to search for the word "red" in all files within the practice_search directory and its subdirectories:
find . -name "*.txt" -exec grep "yellow" {} \;
Create a file called linkfile.txt in the practice_search directory:
Create a symbolic link to linkfile.txt inside subfolder:
Use find to search for symbolic links in the practice_search directory:
Modify the apple.txt file:
Use find to search for files modified in the last 1 day (24 hours) inside the practice_search directory:
Use which to find where the bash shell is installed on your system:
Use whereis to find out where the python3 program and its manual are located:
Every shell script starts with a shebang (#!), which tells the system which interpreter (or shell) to use to run the script. The most common shells are bash, sh, and zsh. For bash, the first line of the script should be: #!/bin/bash
This tells the system to use the bash shell to run the script.
After the shebang, you can write any normal Linux commands just like you would in the terminal. Each command is written on a new line. For example, a script that prints "Hello, World!" to the terminal and lists files in a directory would look like this:
#!/bin/bash
echo "Hello, World!"
ls
Example:
echo "Hello, World!": Prints "Hello, World!" to the terminal.
ls: Lists the files in the current directory.
You can define variables in a shell script to store values. Variables don’t need a type declaration, and there should be no spaces around the = sign.
To use the value stored in a variable, prefix it with $:
#!/bin/bash
name="John"
echo "Hello, $name!"
Example:
name="John": Defines a variable called name and sets its value to "John".
echo "Hello, $name!": Prints "Hello, John!" to the terminal.
You can prompt the user for input using the read command. For example:
#!/bin/bash
echo "What's your name?"
read name
echo "Hello, $name!"
read name: Takes input from the user and stores it in the variable name.
If the user types "Alice", it would print "Hello, Alice!".
You can use if-else statements to run code based on conditions. Here’s the syntax:
if [ condition ]
then
# Commands to run if condition is true
else
# Commands to run if condition is false
fi
Example:
#!/bin/bash
echo "Enter a number:"
read num
if [ $num -gt 10 ]; then
echo "Your number is greater than 10!"
else
echo "Your number is 10 or less."
fi
-gt: Greater than comparison.
if [ $num -gt 10 ]: Checks if the value of num is greater than 10.
Other comparison operators include: -eq (equal to), -ne (not equal to), -lt (less than), -le (less than or equal to), and -ge (greater than or equal to).
Loops allow you to repeat commands. Two common loops are the for loop and the while loop.
For Loop:
The for loop runs a set of commands for each item in a list:
for item in item1 item2 item3
do
echo "Item: $item"
done
Example:
#!/bin/bash
for fruit in apple banana cherry
do
echo "I like $fruit!"
done
This script would print:
I like apple!
I like banana!
I like cherry!
While Loop:
The while loop runs commands as long as a condition is true:
#!/bin/bash
count=1
while [ $count -le 5 ]
do
echo "Count: $count"
count=$((count + 1))
done
This script prints the numbers 1 to 5.
You can define functions in shell scripts to group a set of commands that can be reused. Functions are defined using the following syntax:
function_name() {
# Commands
}
Example:
#!/bin/bash
greet() {
echo "Hello, $1!"
}
greet "John"
greet "Alice"
$1: The first argument passed to the function. In this case, "John" and "Alice" are passed when calling the function.
The output will be:
Hello, John!
Hello, Alice!
Anything after # on a line is a comment. Comments are ignored by the shell and are useful for explaining your code.
#!/bin/bash
# This is a comment
echo "This will run"
Before you can run your script, you need to give it execute permissions using the chmod command: chmod +x script.sh
Then, you can run your script like this:
./script.sh
Every command in a shell script returns an exit status. 0 means success, and any non-zero value means failure. You can check the exit status of the last command with $?:
#!/bin/bash
ls /nonexistent_directory
echo "The exit status is $?"
If the directory doesn’t exist, the exit status will be a non-zero value.
Here’s a simple script that puts it all together:
#!/bin/bash
# This is a simple shell script example
# Ask the user for their name
echo "What's your name?"
read name
# Greet the user
echo "Hello, $name!"
# Create a new directory and a file
mkdir myfolder
touch myfolder/myfile.txt
# Use a loop to print numbers
for i in 1 2 3 4 5
do
echo "Number: $i"
done
# Goodbye message
echo "Goodbye, $name!"
Shell scripting is a powerful way to automate tasks, and understanding the basic syntax allows you to create useful scripts for a variety of purposes.
Have a go at some of these exercises. You can use the internet for research.
Create a script that:
Prompts the user for their name.
Prints a message saying "Hello, [name]!".
Extra: Modify the script to say "Goodbye, [name]!" before it exits.
Hint: Use read to get user input, and echo to display the message.
Write a script that:
Asks the user to enter two numbers.
Asks the user to choose an operation (+, -, *, /).
Performs the chosen operation and displays the result.
Hint: Use read to take input, and if statements or case statements to handle different operations.
Create a script that:
Prompts the user to create a directory.
Moves into the newly created directory using cd.
Creates a file inside the directory with touch and writes some text into the file using echo.
Lists all files in the directory using ls.
Hint: Use mkdir, cd, touch, and echo to perform operations on directories and files.
Write a script that:
Asks the user to enter a number.
Checks whether the number is even or odd.
Prints the result.
Hint: Use the modulo operator (%) and an if statement to determine if the number is even or odd.
Write a script that:
Asks the user to input a number.
Counts down from that number to zero, printing each number.
When the countdown reaches zero, prints "Time's up!".
Hint: Use a while loop to decrement the number until it reaches zero.
Write a script that:
Prompts the user for a file extension (e.g., .txt).
Searches the current directory and its subdirectories for files with that extension using find.
Displays the found files.
Hint: Use read to capture the file extension and find . -name "*.extension" to search for files.
Write a script that:
Greets the user with "Good Morning", "Good Afternoon", or "Good Evening" depending on the current time of day.
Then prints a message like "Hello, [name]! Have a nice day!" based on the user’s input.
Hint: Use the date command to get the current time and if-else to determine the greeting.
Write a script that:
Asks the user to input a number.
Checks if the number is prime (i.e., divisible only by 1 and itself).
Displays whether the number is prime or not.
Hint: Use a for loop to check divisibility from 2 up to the number’s square root.
Write a script that:
Creates a backup of a given directory.
Asks the user to input the directory path.
Copies the directory to a location called backup_[date], where [date] is the current date.
Prints a message indicating the backup was successful.
Hint: Use cp -r to copy directories and the date command to append the current date to the backup folder name.
Write a script that:
Defines a function called greet that takes a name as an argument and prints "Hello, [name]!".
Prompts the user for their name, and then calls the greet function with the user's name as the argument.
Hint: Use a function declaration to define the greet function and pass the user’s name as an argument to the function.
Write a script that:
Defines functions for add, subtract, multiply, and divide.
Prompts the user for two numbers and an operation.
Calls the appropriate function based on the user’s input and displays the result.
Hint: Use read for inputs, and create functions for each operation.
Understanding processes and jobs is essential in Linux as it allows you to manage the programs and tasks running on your system. Here’s an explanation of processes, jobs, and key commands like ps, top, kill, bg, fg, and jobs.
A process is simply an instance of a running program. Every time you execute a command or run an application, it creates a process. Each process has a unique identifier called a PID (Process ID).
Processes can be in different states: running, sleeping (waiting for something such as input or a timer), stopped (paused), or zombie (finished but not yet cleaned up by its parent).
A job refers to a process that is associated with your current terminal session. Jobs can run in the foreground (actively using the terminal) or in the background (running behind the scenes, allowing you to continue using the terminal).
The ps command is used to display information about running processes. It shows details such as the PID, terminal, and running time of each process.
This shows the processes running in your current terminal session.
More detailed output:
This shows all processes running on the system, including system-level processes, with details like user, CPU, memory usage, etc.
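For example:
ps                      # processes in the current terminal session
ps aux                  # every process on the system, with user, CPU, and memory details
ps aux | grep firefox   # filter for a particular program (firefox is just an example)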
The top command displays a real-time, dynamic view of the system’s processes. It shows which processes are using the most CPU and memory, and it updates live.
Usage: top
In top, you'll see columns for the PID, the owning user, CPU usage (%CPU), memory usage (%MEM), and the command name.
You can press: q to quit, k to kill a process (you will be asked for its PID), and h for help.
The kill command sends a signal to a process, most often to terminate (kill) it. You use the PID of the process to specify which one to kill.
Usage: kill PID
You can specify different signals to send: for example, -15 (SIGTERM, the default polite request to terminate) and -9 (SIGKILL, a forceful kill).
Example: kill 1234
This sends a polite termination signal to the process with PID 1234.
If the process doesn't terminate, you can force it with: kill -9 1234
When you run a command in the terminal, it normally runs in the foreground, meaning you can’t use the terminal for anything else until the command finishes. If you pause a job (using Ctrl+Z), you can resume it in the background using bg.
Usage: bg (or bg %jobnumber)
Example:
Run a command, for example: sleep 100
It will take 100 seconds to complete, and you can’t use the terminal for anything else.
Pause it by pressing Ctrl+Z. The terminal will output something like:
Send it to the background using:
Now the job will continue running in the background, and you can use the terminal for other tasks.
If you have a job running in the background, you can bring it back to the foreground with the fg command.
Usage: fg (or fg %jobnumber)
Example:
To bring the background job (e.g., sleep 100) back to the foreground: fg %1
If you have only one background job, you can just use: fg
The jobs command lists all the jobs associated with your terminal session, both background and stopped jobs.
Usage: jobs
This will output something like: [1]+  Stopped                 sleep 100
In this case, job number 1 is the paused sleep 100 command from the earlier example; after bg it would be shown as Running.
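Putting bg, fg, and jobs together, a typical session looks something like this (the bracketed output lines are approximate):
sleep 100   # start a long-running command in the foreground
# press Ctrl+Z to pause it; the shell prints something like: [1]+  Stopped  sleep 100
bg          # resume job 1 in the background
jobs        # list jobs; shows something like: [1]+  Running  sleep 100 &
fg %1       # bring job 1 back to the foreground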
You're running a backup script that copies large directories, and it’s taking longer than expected. You need to check its status and continue using the terminal for other tasks while the backup runs.
Start the backup script:
The terminal is now occupied by the script.
Pause the backup process (if you realize you need the terminal for something else):
Press Ctrl+Z to stop the script temporarily:
Send the script to the background so it continues running while you work on other things:
The job is now running in the background.
List the background jobs to check the status:
Output:
Bring the job back to the foreground if you want to check on the progress:
Now the script is running in the foreground again.
You notice your system has become slow, and you want to find out which process is consuming the most CPU or memory. You might need to terminate that process.
Use top to see real-time system resource usage:
In the top output, you see that a process (e.g., myprogram) is using too much CPU.
Press k in top to kill the process:
Enter the PID (let’s say it's 1234).
Use signal 15 to gracefully terminate it, or use 9 to forcefully kill it if it's not responding:
Verify that the process is terminated by using ps:
If myprogram no longer appears, it’s been successfully terminated.
You need to run multiple tasks (like downloading files, running scripts, etc.) simultaneously but don’t want them to block your terminal. You can run them in the background.
Start a download using wget:
Pause the download using Ctrl+Z:
Move it to the background so it continues downloading:
Start another task (e.g., copying a large directory):
Pause and background this second task using Ctrl+Z and bg:
List the running jobs to see both tasks running in the background:
Output:
Bring any job back to the foreground when you want to monitor it:
A graphical application (e.g., a text editor or browser) has frozen, and you want to kill and restart it.
Find the process ID of the frozen application using ps (let's say it's gedit):
Kill the process using the PID:
If the application doesn’t respond to a normal termination, force kill it:
Restart the application after it’s been terminated:
A program you’ve run (myapp) has spawned multiple instances, and you want to kill them all.
Use ps to find all instances of myapp:
ps aux | grep myapp
Output:
Kill all instances using a loop:
This command will kill all processes that match the name myapp.
Verify that all instances are terminated using ps:
Understanding network configuration is crucial for working with Linux systems, whether you're setting up a home server, connecting to a remote machine, or troubleshooting network issues. This involves configuring network interfaces, IP addresses, gateways, and more. Here’s a brief guide to basic network configuration in Linux and common networking commands.
Network configuration on Linux can be handled manually or via tools. Depending on the distribution, configuration files might differ slightly:
Debian/Ubuntu:Network interfaces are often defined in /etc/network/interfaces or via the NetworkManager GUI or CLI.
Red Hat/CentOS/Fedora:Network configuration can be found in /etc/sysconfig/network-scripts/ and controlled by NetworkManager.
Common Networking Commands
Let’s dive into some essential networking commands and what they do.
The ping command tests the network connection between your system and a remote server. It works by sending ICMP (Internet Control Message Protocol) "echo requests" to the target and waiting for a response.
Usage: ping destination
Example: ping google.com
This sends packets to Google's server to check if it's reachable.
Common options:
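A common option is -c, which limits the number of packets sent, for example:
ping -c 4 google.com   # send four echo requests and then stop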
ifconfig is the traditional tool to configure and view network interfaces, though it is being replaced by the more modern ip command.
ifconfig (old method):
Usage:
This displays all active network interfaces along with their IP addresses, MAC addresses, and network statistics.
To bring an interface up (enable it):
To bring an interface down (disable it):
ip (modern method):
Usage:
This command shows all IP addresses and interfaces.
To display only IPv4 addresses:
To bring an interface up or down:
To assign a new IP address to an interface:
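Hedged examples of the commands described above (eth0 is a placeholder interface name and 192.168.1.100/24 is an example address; substitute your own):
ifconfig                                     # view all active interfaces (older tool)
sudo ifconfig eth0 up                        # bring an interface up
sudo ifconfig eth0 down                      # take it down
ip addr show                                 # view all interfaces and addresses (modern tool)
ip -4 addr                                   # show only IPv4 addresses
sudo ip link set eth0 up                     # bring an interface up
sudo ip link set eth0 down                   # take it down
sudo ip addr add 192.168.1.100/24 dev eth0   # assign a new IP address to an interface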
netstat shows network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. It's useful for diagnosing network issues and monitoring connections.
netstat (old method):
Usage:
This command shows:
ss (modern method):
Usage:
This command works similarly to netstat but with better performance for displaying socket information.
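For example (the -tulpn options show TCP and UDP listening sockets with numeric ports and the owning process):
sudo netstat -tulpn
sudo ss -tulpn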
The ssh command allows you to securely connect to a remote system over the network using the SSH protocol. It’s essential for managing servers or remote machines.
Usage:
Example:
This connects to the machine at 192.168.1.50 with the username user.
To use a custom port:
To copy files securely between two machines, you can use scp:
Example:
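For example (the username, address, port, and file names are placeholders):
ssh user@192.168.1.50                        # connect to a remote machine
ssh -p 2222 user@192.168.1.50                # connect on a custom port
scp file.txt user@192.168.1.50:/home/user/   # copy a local file to the remote machine
scp user@192.168.1.50:/home/user/file.txt .  # copy a remote file to the current directory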
The traceroute command traces the path that packets take from your computer to a destination server. It shows the various routers (hops) the packets pass through.
Usage: traceroute destination
Example: traceroute google.com
This shows the routers between your machine and Google's server, including their IP addresses and response times.
These commands are used to query DNS (Domain Name System) to resolve domain names into IP addresses or vice versa.
Usage:
Example:
This resolves google.com to its corresponding IP address.
dig:
Usage:
Example:
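For example:
nslookup google.com     # resolve google.com to an IP address
dig google.com          # more detailed DNS information
dig +short google.com   # just the answer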
You suspect that your internet connection is down. You can use ping to check:
If you see replies, your internet connection is working. If you don’t, it could be a network issue.
You need to know the IP address of your machine. Use ip addr to view the IP configuration:
You need to manage a remote server, so you use ssh to connect:
You're troubleshooting network performance issues and want to check which ports are open and listening:
You're experiencing slow access to a website and want to see where the delay is happening:
These commands help you diagnose network issues, set up remote access, and configure network interfaces on your machine.
The curl (Client URL) command is primarily used for transferring data across networks and supports multiple protocols like HTTP, FTP, and IMAP, among others. It's often favored in automation tasks because it is designed to operate without user interaction and can be used for endpoint testing, debugging, and error logging.
Since curl is not pre-installed on most systems, on Debian-based distributions, you can install it using the following command:
sudo apt install curl
Usage: curl [options] URL
When downloading large files it is useful to see progress, and curl can show a progress bar with the -# option.
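A small sketch (the URL is a placeholder):
curl -O https://example.com/file.tar.gz      # download, keeping the remote file name
curl -# -O https://example.com/file.tar.gz   # the same download with a progress bar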
mtr combines the functionalities of both ping and traceroute utilities, making it ideal for network diagnostics by providing a real-time view of network response and connectivity. To use mtr, simply append a domain name or IP address, and it will generate a live traceroute report.
Usage: mtr domain_or_ip (for example, mtr google.com)
The whois command helps you retrieve information about registered domains, IP addresses, name servers, and more, as it acts as a client for the whois directory service. This utility may not be pre-installed on your system, but on Ubuntu-based distributions, you can install it using the following command:
sudo apt install whois
Usage: whois hostname
You can use an IP address instead of a hostname.
iftop (Interface TOP) is commonly used by administrators to monitor bandwidth usage and can also serve as a diagnostic tool for network issues. This utility is not pre-installed and requires manual installation. On Ubuntu systems, you can install it using the following command:
sudo apt install iftop
Usage: iftop -h
This will tell you what options you can use with it.
tcpdump is a packet-sniffing and analysis tool used to capture, analyze, and filter network traffic. It can also serve as a security tool by saving captured data in pcap format, which can later be reviewed using Wireshark. Similar to other utilities, tcpdump is not pre-installed. If you're using an Ubuntu-based system, you can install it using the following command:
sudo apt install tcpdump
Usage: tcpdump [options]
You can capture packets, and you can look at the captured packets afterwards with the -r flag; for example:
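(eth0 and the file name are placeholders)
sudo tcpdump -i eth0                   # capture packets on an interface
sudo tcpdump -i eth0 -w capture.pcap   # save the capture to a pcap file
tcpdump -r capture.pcap                # read a saved capture back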
As the name implies, the ethtool utility is mainly used for managing Ethernet devices. It allows you to adjust settings like network card speed, auto-negotiation, and more. However, it may not come pre-installed on your system. To install it on an Ubuntu-based machine, you can use the following command:
sudo apt install ethtool
Usage: ethtool interface_name (for example, ethtool eth0, where eth0 is whatever your interface is called)
As a straightforward yet powerful network troubleshooting tool, nmcli is often one of the first utilities a sysadmin turns to for diagnosing network issues. It can also be used within scripts for automation. To monitor the connectivity status of devices, you can use the nmcli command as shown:
Usage: nmcli device status (shows the connection state of each network device)
nmap is a tool used for network exploration and security auditing. It is popular among hackers and security enthusiasts as it provides detailed, real-time information about the network, connected IPs, port scanning, and more. To install the nmap utility on Ubuntu-based distributions, use the following command:
sudo apt install nmap
Usage: nmap target (an IP address, hostname, or network range)
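For example (the addresses are placeholders for your own network; only scan networks you are allowed to scan):
nmap 192.168.1.1          # scan the common ports of a single host
nmap -sV 192.168.1.0/24   # scan a whole subnet and detect service versions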
bmon is an open-source tool used to monitor real-time bandwidth and debug network issues by displaying statistics in a user-friendly format. One of its standout features is its graphical presentation, and it even allows output in HTML. Installation is easy, as bmon is available in the default repositories of popular Linux distributions, including Ubuntu.
Usage: bmon (run it with no arguments to see live bandwidth statistics)
Managing firewalls is a critical aspect of network security, and firewalld allows you to add, configure, and remove firewall rules efficiently. However, firewalld is not pre-installed and requires manual installation. On Ubuntu-based distributions, you can install it using the following command:
Usage:
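For example (port 8080/tcp is just an illustration):
sudo apt install firewalld                          # install firewalld on Debian/Ubuntu
sudo firewall-cmd --state                           # check that the firewall is running
sudo firewall-cmd --list-all                        # show the active zone's rules
sudo firewall-cmd --add-port=8080/tcp --permanent   # open a port permanently
sudo firewall-cmd --reload                          # apply the permanent change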
In week two, we expanded on foundational skills by delving into advanced file operations, powerful search tools, shell scripting basics, process management, and networking essentials. Here’s a summary of what we covered and the main takeaways for each area.
Building on basic file manipulation, we explored more complex file operations and tools to manage files efficiently.
Permissions and Ownership: Files and directories in Linux have permissions that control who can read, write, and execute them.
Links:
Managing file permissions and understanding the difference between hard and soft links is crucial for secure and organized file management.
Linux provides powerful commands for searching files and text patterns across directories.
find: Locates files based on criteria like name, type, size, and modification date.
locate: Quickly finds files by name using an indexed database.
grep: Searches for patterns within files. Supports regular expressions, making it versatile for text search.
Knowing how to search for files and patterns enables efficient file management, troubleshooting, and data discovery.
Shell scripting is a powerful way to automate tasks and enhance productivity in Linux.
Basic Syntax:
Creating and Running Scripts:
Shell scripts automate repetitive tasks and simplify complex workflows, a fundamental skill for Linux administrators.
Processes represent running programs. Knowing how to manage processes is essential for troubleshooting and optimizing system performance.
Effective process management is vital for keeping a Linux system running smoothly and responding to issues like unresponsive programs.
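For example (1234 stands in for a real process ID):
ps aux | grep firefox   # find a process
top                     # live view of resource usage
kill 1234               # ask the process to terminate
kill -9 1234            # force it to stop if it ignores the request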
Understanding basic networking commands helps you troubleshoot connectivity issues and manage network settings.
A solid grasp of basic networking commands allows you to diagnose network issues, view IP configurations, and securely connect to remote systems.
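For example (the address and hostname are placeholders):
ip addr show
ping -c 4 example.com
ssh user@192.168.1.10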
a) mv
b) touch
c) cp
d) cat
a) Finds and displays disk usage statistics
b) Creates a new directory
c) Displays the contents of a file
d) Searches for files or directories based on specific criteria
a) To compress files
b) To search for patterns in text
c) To display memory usage
d) To move files
a) //
b) --
c) /* */
d) #
a) Lists running processes
b) Displays file permissions
c) Moves files between directories
d) Displays system memory usage
a) bg
b) jobs
c) fg
d) kill
a) top
b) ps
c) kill
d) jobs
a) Grants read, write, and execute permissions to everyone
b) Grants full permissions to the owner, and read/execute permissions to others
c) Deletes the file named filename
d) Grants read and execute permissions to the owner only
a) ping
b) ssh
c) ifconfig
d) netstat
a) Changes to the root directory
b) Moves up one directory level (to the parent directory)
c) Creates a new directory
d) Deletes the current directory
Answer: c) cp
Answer: d) Searches for files or directories based on specific criteria
Answer: b) To search for patterns in text
Answer: d) #
Answer: a) Lists running processes
Answer: c) fg
Answer: a) top
Answer: b) Grants full permissions to the owner, and read/execute permissions to others
Answer: a) ping
Answer: b) Moves up one directory level (to the parent directory)
A package manager is a tool that automates the process of installing, upgrading, configuring, and removing software on a computer. On Linux systems, package managers are crucial for maintaining the software ecosystem, ensuring that users can easily install applications, libraries, and utilities, along with their dependencies, from a central repository.
What Are Package Managers For?
Package managers are essential for several reasons:
1. Centralized Software Management: Without a package manager, users would need to download, compile, and install software manually from source code, which is time-consuming and prone to errors. A package manager centralizes this process, offering a consistent and user-friendly way to manage software.
2. Automatic Dependency Resolution: When installing software manually, one might overlook necessary dependencies, leading to errors or failed installations. A package manager automatically resolves these dependencies, downloading and installing everything needed for the software to work correctly.
3. Version Control: Package managers keep track of installed software versions and allow users to easily upgrade, downgrade, or switch between different versions of an application. This is especially important for software that relies on specific versions of libraries or frameworks.
4. Security and Stability: Since package managers pull software from trusted, well-maintained repositories, they help ensure that the software is secure and stable. The repositories are usually maintained by the Linux distribution’s developers, and the packages undergo testing before being released to the public.
5. System Integrity: By managing software installations and updates, package managers prevent conflicts between different software packages. For instance, they avoid scenarios where two packages require different versions of the same dependency, which could break functionality.
6. Consistency Across Systems: Package managers ensure consistency across systems. By using the same package manager, users and administrators can replicate software environments on different machines, making it easier to manage multiple systems (especially in enterprise environments).
How Package Managers Work
At their core, package managers simplify software management by handling tasks that would otherwise be complex and error-prone if done manually. Here's how they generally work:
1. Package Repositories: Package managers connect to online repositories (servers or mirrors) that store collections of software, known as packages. Each package includes not only the application itself but also metadata about dependencies, version numbers, and installation instructions.
2. Package Metadata: Every package contains metadata—information about its dependencies (other software it relies on to function), the version, and its intended system architecture. The package manager uses this metadata to ensure that the right versions of software are installed.
3. Dependencies Management: Many applications depend on other software (called dependencies) to run correctly. The package manager resolves these dependencies automatically, downloading and installing the necessary software alongside the requested package. This prevents the user from having to manually hunt down each required library or tool.
4. Installation: Once a user requests to install a package, the package manager downloads the appropriate files and places them in the correct directories on the system. It also registers the package, so the manager knows it's installed and can manage it in the future.
5. Upgrading Software: Package managers allow users to update all installed software to the latest versions with a single command. When a package is updated in the repository, the package manager downloads the new version and replaces the older one. It may also install new dependencies or remove obsolete ones.
6. Uninstallation and Cleanup: Removing software via a package manager ensures that all the files associated with the application are also removed. This includes binaries, configuration files, and dependencies that are no longer needed. This keeps the system clean and prevents "dependency hell," where unneeded software clogs up the system.
Types of Package Managers
Linux distributions use different package managers depending on the distribution family they are based on:
Each package manager works slightly differently, but their goals are the same: to simplify software installation, removal, and management on Linux systems.
Used in Debian-based distributions like Ubuntu and Linux Mint, apt is one of the most widely used package managers.
Common Commands:
Update the package list:
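sudo apt update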
This refreshes the local cache of available packages from repositories, but it doesn't install or upgrade anything yet.
Upgrade all installed packages:
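sudo apt upgrade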
This upgrades all the packages currently installed on your system to their latest available versions.
Install a new package:
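sudo apt install package_name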
Example:
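sudo apt install htop    (htop here is just an example package)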
Clean up unnecessary packages:
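sudo apt autoremove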
This removes packages that were installed as dependencies but are no longer needed.
Used in CentOS and older versions of Red Hat Enterprise Linux (RHEL). While it has been mostly replaced by dnf in newer versions, it's still widely used.
Common Commands:
Update all repositories and package lists:
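sudo yum check-update    (refreshes repository metadata and lists available updates)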
Install a new package:
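sudo yum install package_name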
Example:
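sudo yum install httpd    (httpd is just an example package)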
Remove a package:
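sudo yum remove package_name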
Example:
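sudo yum remove httpd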
Update all installed packages:
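sudo yum update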
This upgrades all packages to the latest versions available in the repositories.
dnf is the newer version of yum and is used in newer Fedora, CentOS 8+, and RHEL 8+ distributions. It provides faster dependency resolution and better performance compared to yum.
Common Commands:
Update the package list:
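sudo dnf check-update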
Install a new package:
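sudo dnf install package_name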
Example:
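sudo dnf install httpd    (httpd is just an example package)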
Remove a package:
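sudo dnf remove package_name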
Update all installed packages:
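sudo dnf upgrade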
Clean up old packages:
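sudo dnf autoremove    (removes dependencies that are no longer needed; sudo dnf clean all clears the download cache)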
pacman is the package manager used by Arch Linux and Arch-based distributions like Manjaro. It is a powerful tool designed to be simple and efficient.
Common Commands:
Synchronize the package list:
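sudo pacman -Sy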
Install a new package:
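sudo pacman -S package_name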
Example:
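sudo pacman -S htop    (htop is just an example package)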
Remove a package:
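sudo pacman -R package_name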
Example:
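sudo pacman -R htop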
Upgrade all installed packages:
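sudo pacman -Syu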
Remove unneeded packages:
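sudo pacman -Rns $(pacman -Qdtq)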
This command removes orphaned packages (those that were installed as dependencies but are no longer needed).
Although the exact commands differ between package managers, they all provide the same core functions: refreshing package lists, installing, upgrading, and removing software. By automating the handling of dependencies, repositories, and system integration, they remove much of the complexity traditionally associated with maintaining a Linux system.
In Linux, user and group management is fundamental to controlling access, ensuring security, and organizing system resources. Each user has a unique identity, and groups are used to manage collections of users that share similar permissions. By understanding how to create and manage users and groups, you can control who can access files, run programs, and administer the system.
Every user on a Linux system has:
To create a new user, the useradd command is used (or adduser, which is a more user-friendly wrapper in some distributions). After a user is created, you can set their password and assign them to groups.
When a user is created, their home directory is usually set up automatically, and a shell is assigned to them. System administrators can customize these settings depending on the user's role.
Managing Users
Adding, Modifying, and Deleting a User in Linux
To add a new user, use the useradd command, which creates a new user account and associated settings, such as the home directory and shell.
Command:
Example:
Let's say we want to create a user named john.
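sudo useradd john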
This creates the user john but does not set up a password or create a home directory. To create a home directory and set the default shell, you can use options like -m (create home directory) and -s (specify the shell).
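For example: sudo useradd -m -s /bin/bash john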
Setting a Password:
To set a password for the user john, use the passwd command:
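sudo passwd john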
This will prompt you to enter and confirm the password. Depending on how the user was created (adduser, for example, prompts for a password during creation), a password may already be set, but you can change it at any time with the command above.
To modify a user, use the usermod command. You can change the user's home directory, add them to groups, or modify other account details.
Command:
Example 1: Change Home Directory
Let's change the home directory of user john to /home/johndoe:
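sudo usermod -d /home/johndoe john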
You can also use the -m option to move the contents of the old home directory to the new one.
Example 2: Add User to a Group
To add john to a group (e.g., the sudo group so he can run administrative commands):
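sudo usermod -aG sudo john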
-aG: Appends the user to the group (here, the sudo group). The -a is important because without it, the user will be removed from any other groups.
To remove a user, use the userdel command. You can delete just the user account or also remove their home directory and mail spool.
Command:
Example:
To delete user john but leave their home directory:
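sudo userdel john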
To delete john and their home directory:
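sudo userdel -r john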
The -r option removes the user’s home directory and mail spool, ensuring no leftover files from the deleted account.
Options for useradd, usermod, and userdel Commands
useradd [options] username
There are a few more options you can use; running useradd --help (or man useradd) will list them all. I have covered most of them above.
userdel [options] username
Option Description
-r Remove the user's home directory and mail spool
-f Force removal of the account, even if the user is still logged in
Running userdel --help will list them all. I have covered most of them above.
Once users are created, they can be managed in various ways, such as changing passwords, deleting users, or modifying their permissions and groups.
A group is a collection of users, and it simplifies managing permissions for multiple users. Rather than giving individual users access to specific files or resources, you can assign users to a group, and then grant that group the necessary permissions. Groups make it easier to manage permissions, especially in systems with many users.
Each user belongs to at least one group, which is typically their primary group. Users can also belong to additional secondary groups.
Creating and Managing Groups
To create a new group, the groupadd command is used. You can then add users to the group with the usermod command, assigning them to either their primary or secondary groups.
For example, if you have a development team, you can create a devs group and add all your developers to this group. Then, by giving the devs group access to certain project directories, you automatically grant all members the same level of access.
Adding, Modifying, and Deleting Groups in Linux
Managing groups is an important part of user administration in Linux. Here’s how you can add, modify, and delete groups using basic Linux commands.
The groupadd command is used to create a new group.
Command:
Example:
Let’s say we want to create a group called developers.
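sudo groupadd developers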
Common Options:
The groupmod command allows you to change group details, such as the group’s name or GID.
Command:
Example 1: Change the Group Name
Let’s change the name of the group developers to engineers.
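sudo groupmod -n engineers developers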
-n newname: Changes the group name.
Let’s change the GID of the group engineers to 1050.
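sudo groupmod -g 1050 engineers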
The groupdel command is used to delete an existing group.
Command:
Example:
To delete the engineers group:
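sudo groupdel engineers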
You cannot delete a group if it is the primary group of any user. You must change the user’s primary group first before deleting the group.
These are the basic commands for managing groups in Linux. These allow you to efficiently create new groups, change existing ones, and delete groups as needed.
In Linux, system administration tasks (such as installing software or modifying system configurations) require root or superuser access. Rather than giving every user full root privileges (which can be dangerous), Linux uses the sudo command to allow authorized users to perform specific administrative tasks.
What is sudo?
sudo (short for "superuser do") is a tool that allows a permitted user to execute a command as the superuser (root) or another user, as specified by the system's sudoers file.
For example, instead of logging in as the root user to install software, a regular user with sudo privileges can run:
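sudo apt install package_name    (package_name stands for whatever package you want to install)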
The sudo command temporarily elevates that user’s privileges to complete the task and then returns them to their normal permissions.
The users who can use sudo are controlled by the sudoers file, typically located at /etc/sudoers. This file dictates which users and groups have sudo privileges and what specific commands they can run as root.
To edit this file, the command visudo is used because it checks for syntax errors before saving changes. Incorrectly modifying the sudoers file can lock out administrative access to the system.
The basic structure of the sudoers file includes:
For example, to allow a user named alice to use sudo, you might add the following to the sudoers file:
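alice ALL=(ALL:ALL) ALL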
This means that the user alice can run all commands as any user on any host (typically only relevant in multi-system environments).
Limiting sudo Privileges
You can also restrict which commands a user can run with sudo. For instance, if you want bob to only be able to restart the web server, you could add:
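bob ALL=(root) /usr/bin/systemctl restart apache2
(The path to systemctl can vary between distributions; check it with which systemctl.)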
This grants bob permission to run only the systemctl restart apache2 command as root, but nothing else.
Best Practices for Using sudo
By carefully managing users, groups, and sudo access, you can maintain a secure and well-organized Linux environment.
In Linux, user account information is stored in the /etc/passwd and /etc/shadow files. These files hold important details about users, including their login names, user IDs, home directories, shells, and more.
Let’s break down what each file does and then explain the structure of the passwd entry.
1. /etc/passwd File
The /etc/passwd file contains essential information about user accounts. Each line in the file represents a user, and the fields are separated by colons (:).
Structure of /etc/passwd:
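username:password:UID:GID:comment:home_directory:shell
The entry that the breakdown below refers to looks like this:
bob:x:1001:1001::/home/bob:/bin/bash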
Field Breakdown:
bob (username): This is the username. It's used to log in, and it's the unique name for this user on the system.
x (password placeholder):
In older systems, this field used to contain the encrypted password. However, for security reasons, the password is now stored in the /etc/shadow file, and a placeholder x is used here.
(Note: In older versions of Linux, this field used to store the password itself, but this was a security risk since /etc/passwd is world-readable.)
1001 (UID - User ID):
This is the unique User ID (UID) assigned to the user bob. It’s an integer that the system uses to identify the user. For example:
UIDs starting from 1000 (or 1001) are used for regular user accounts.
1001 (GID - Group ID):
This is the Group ID (GID) associated with the user’s primary group. In this case, the primary group for bob has the GID of 1001. The details about the group can be found in the /etc/group file.
"" (comment field):
This is typically used to store a description of the user (e.g., the user's full name). In this case, it's empty, but it could contain something like "bob hope".
/home/bob (home directory):
This is the user’s home directory, where their personal files and configuration files are stored. When bob logs in, they are taken to /home/bob.
/bin/bash (default shell):
This is the user’s default shell, which is the program that runs commands when they log in. In this case, bob uses the Bash shell. Other options could be /bin/sh, /bin/zsh, etc.
The /etc/shadow file contains encrypted password information and additional security-related details about users. This file is readable only by the root user for security purposes.
Structure of /etc/shadow:
Example entry:
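bob:$6$randomhashedpassword$...:19001:0:99999:7:::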
Field Breakdown:
bob (username):
$6$randomhashedpassword$... (encrypted password):
This is the user's encrypted password hash. The number after the first $ sign identifies the hashing algorithm used: $1 indicates MD5, $5 indicates SHA-256, and $6 indicates SHA-512 (the default on most modern systems).
If the password is locked or disabled, this field might contain ! or *.
19001 (last password change):
This is the date of the last password change, stored as the number of days since January 1, 1970 (the UNIX epoch). In this example, the user last changed their password 19,001 days after the epoch.
0 (minimum days):
The minimum number of days required before the user is allowed to change their password again. In this case, it’s 0, meaning the user can change their password anytime.
99999 (maximum days):
The maximum number of days a password can be used before it must be changed. In this case, 99999 days is practically unlimited.
7 (warning period):
The number of days before the password expires that the system will warn the user to change it.
:: (inactive and expiration fields):
These fields specify the number of inactive days before the account is disabled and the expiration date of the account, respectively. Both are empty here, meaning no account expiration or inactivity settings are applied.
/etc/passwd contains basic user information, such as username, UID, GID, home directory, and shell. It’s readable by everyone, but sensitive data (like the password) is stored in /etc/shadow.
/etc/shadow stores the encrypted passwords and additional information about password aging (such as expiration dates).
Example Breakdown
/etc/passwd entry:
/etc/shadow entry:
By understanding the structure of these files, you can manage and troubleshoot user accounts and passwords in Linux systems efficiently.
The sudo command is a powerful tool that allows users to execute commands with elevated privileges. Since this can be a security-sensitive action, it's important to log sudo usage for auditing purposes. In most Linux distributions, sudo commands are logged to specific log files that record details about who ran a command, when it was run, and what command was executed.
Here’s an overview of where sudo log files are located, the kind of information they contain, and why they’re useful.
In most Linux distributions, sudo logs can be found in the system log files, typically located in the /var/log/ directory. The specific file depends on the distribution and its configuration.
Common log file locations:
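/var/log/auth.log on Debian- and Ubuntu-based systems
/var/log/secure on Red Hat-based systems (RHEL, CentOS, Fedora)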
These files log authentication events, including sudo activity, login attempts, and security-related messages.
The sudo logs capture important information about each sudo command execution. This includes:
Example log entry (from /var/log/auth.log):
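A representative entry (the exact format varies slightly between versions) looks something like:
Oct 13 10:30:25 my-computer sudo:   bob : TTY=pts/1 ; PWD=/home/bob ; USER=root ; COMMAND=/usr/bin/apt update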
Breaking it down:
To view the sudo logs, you can open the relevant log file with a text editor or use the grep command to filter out sudo entries.
Example on Ubuntu:
To view sudo logs in Ubuntu, open /var/log/auth.log:
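sudo less /var/log/auth.log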
To specifically filter for sudo commands:
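sudo grep sudo /var/log/auth.log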
Example on Red Hat-based systems:
On RHEL or CentOS, sudo activity is logged in /var/log/secure:
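sudo less /var/log/secure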
To view just the sudo entries:
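sudo grep sudo /var/log/secure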
Security Auditing:
The logs help monitor which users are using sudo and what commands they are running. This is critical for auditing purposes, as it provides insight into potential misuse or unauthorized access attempts.
Troubleshooting:
If an administrative command fails or behaves unexpectedly, reviewing sudo logs can help you see if the correct command was issued or if there were permission problems.
Detecting Suspicious Activity:
Logs help detect unusual sudo usage, such as users running unexpected commands, executing commands they don’t normally use, or attempts to elevate privileges when they shouldn’t.
Tracking Remote Access:
Since the logs capture the terminal from which the command was executed (e.g., pts/1), you can trace whether commands were run remotely via SSH or locally. This can help spot remote users executing commands they shouldn’t be.
By default, sudo logs to the system logs (like /var/log/auth.log or /var/log/secure), but you can configure it to log to a different location or with more detailed information if needed.
To configure sudo logging behavior, you can modify the /etc/sudoers file using visudo. You can adjust parameters like:
For example, adding this line to /etc/sudoers logs all commands to a specific log file:
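Defaults logfile="/var/log/sudo.log"    (the path is up to you)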
Since /var/log/ files can grow large over time, Linux systems use a log rotation mechanism to manage log sizes. This ensures that old log files are archived and compressed, and new log files are created.
Log rotation is typically managed by the logrotate service, which automatically rotates files like /var/log/auth.log or /var/log/secure. You can configure how often the logs are rotated and how many old logs to keep.
Summary of sudo Log Files:
Location: /var/log/auth.log on Debian- and Ubuntu-based systems, /var/log/secure on Red Hat-based systems.
Contents: Includes the user who invoked sudo, the command they ran, the time, the terminal, and whether the command was successful or failed.
Useful for: security auditing, troubleshooting failed administrative commands, detecting suspicious activity, and tracking remote access.
By understanding and reviewing sudo log files, administrators can keep a close watch on system usage, ensuring security and compliance with system policies.
Here are 10 exercises for you to practice managing users and groups on a Linux virtual machine. These exercises will help you get comfortable with creating, modifying, and deleting users and groups, as well as configuring permissions with sudo.
Create a new user named alice with:
Set a password for alice.
Steps:
Create a new user named bob with:
Steps:
Steps:
Steps:
Create a system user named service_user with:
Steps:
Steps:
Steps:
Delete the user temporary_user, but keep their home directory intact.
Steps:
Delete the user alice and remove her home directory and mail spool.
Steps:
Steps:
Steps:
Understanding how to manage disk partitions and filesystems is essential for administering Linux systems. Disk partitions allow a single disk to be divided into multiple logical storage units, while filesystems determine how data is stored and accessed on each partition. Tools like fdisk, mkfs, and mount are used to manage and work with disk partitions and filesystems in Linux.
Let’s break this down step-by-step, including how each tool is used.
A disk partition is a logical division of a hard disk. A single physical disk can be split into multiple partitions, each of which can contain its own filesystem.
Why Use Partitions?
Partitions let you separate the operating system from user data, dedicate space for swap, and use different filesystems for different purposes.
What Is a Filesystem?
A filesystem is the structure that dictates how data is stored, organized, and retrieved on a partition. Common filesystems in Linux include:
Here are some of the core Linux utilities for managing partitions and filesystems:
fdisk is used to create, delete, modify, and view disk partitions.
Command:
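sudo fdisk /dev/sdX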
Replace /dev/sdX with your disk name (e.g., /dev/sda for the first disk).
Basic Commands in fdisk:
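n: create a new partition
d: delete a partition
p: print the partition table
t: change a partition's type
w: write changes and exit
q: quit without saving
m: show help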
mkfs: Create Filesystems
Once you have a partition, you need to create a filesystem on it using the mkfs (make filesystem) command.
Command:
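sudo mkfs.ext4 /dev/sdX1    (ext4 is just one option; replace /dev/sdX1 with your partition)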
mount: Mounting Filesystems
After creating a filesystem, it must be mounted so that the system can access it.
Command:
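sudo mount /dev/sdX1 /mnt/mydata    (the mount point /mnt/mydata is a placeholder and must already exist)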
To unmount a filesystem, use:
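sudo umount /mnt/mydata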
Permanent Mounting:
You can configure a partition to be automatically mounted at boot by editing the /etc/fstab file.
lsblk: View Block Devices
lsblk displays a detailed list of available storage devices and their partitions.
Command:
df: Check Disk Space
Command:
Here’s a basic example of how to create, format, and mount a new partition:
Step 1: Check Available Disks
This shows all available disks and partitions. Let's say you see /dev/sda with some free space.
Step 2: Partition the Disk
Use fdisk to create a new partition:
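sudo fdisk /dev/sda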
Type n to create a new partition.
Accept the default values (or set custom sizes if you need).
Type w to write changes and exit.
Now that you have a new partition (e.g., /dev/sda3), format it with mkfs:
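sudo mkfs.ext4 /dev/sda3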
Mount the partition to a directory (e.g., /mnt/newdata):
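sudo mkdir -p /mnt/newdata
sudo mount /dev/sda3 /mnt/newdata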
Use df or lsblk to verify that the partition is mounted:
df -h
parted: Similar to fdisk, but more powerful and supports GPT partitioning (for disks larger than 2 TB).
parted also allows resizing partitions.
blkid: Shows information about block devices, including UUID and filesystem type.
fsck: Used to check and repair filesystems.
mount -a: Re-mounts all filesystems listed in /etc/fstab.
When adding an entry to /etc/fstab, you can specify how a filesystem should be mounted at boot. Here’s an example entry:
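/dev/sda3  /mnt/newdata  ext4  defaults  0  2
(Using the UUID reported by blkid instead of /dev/sda3 is more robust, because device names can change between boots.)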
Fields:
Partition Table Types: Older disks use the MBR partitioning scheme, while newer disks (especially those over 2 TB) use the GPT partitioning scheme, supported by tools like parted and gdisk.
Backup Important Data: Before creating, modifying, or deleting partitions, always back up your data. Partitioning operations can result in data loss if done incorrectly.
Unmounting Safely: Always unmount a partition before making changes to avoid data corruption.
Summary
During the installation process, you’ll eventually come to a screen asking how you want to set up the disk. You'll see options like:
Choose “Something else” to manually create partitions.
Before you create the partitions, you should know what partitions are typically needed on a Linux system:
Let’s assume you have a 500 GB hard drive. Here’s a basic setup you might use:
Now, let’s create the partitions one by one. In the partitioning menu, you will see the disk available (e.g., /dev/sda). It will show as unallocated or free space.
1. EFI Partition (if your machine uses UEFI):
This is where the system will store boot information if using UEFI. If you're using BIOS instead of UEFI, you don’t need this partition.
2. Root Partition (/):
Select the remaining free space and click Add.
The root partition will contain the operating system, system files, and applications.
3. Swap Partition:
Select the remaining free space and click Add.
Swap space is used when your system runs out of RAM, acting as temporary memory on the disk.
4. Home Partition (/home):
Select the remaining free space and click Add.
Once you have created all the partitions (EFI, root, swap, and home), review them in the partition table. It should look something like this:
If everything looks correct, proceed by clicking Install Now. The installer will write the partitions to the disk and start copying the files.
Why Manual Partitioning Is Useful:
Monitoring system performance is essential for keeping an eye on how your Linux machine is running—checking things like CPU usage, memory consumption, disk activity, and running processes. Let’s talk about some key commands to monitor performance:
Monitoring System Performance:
The top command is a real-time system monitor that displays running processes and resource usage.
Example of what top displays:
You can sort processes by CPU usage, memory usage, and even kill processes directly from top by pressing the k key followed by the process ID
htop is like top, but with a more colorful and user-friendly interface.
Key features:
The iostat command is used to monitor system input/output device loading by observing the time the devices are active compared to their average transfer rates.
This command is often used by system administrators to detect bottlenecks in disk subsystems.
vmstat provides detailed information about system performance, focusing on processes, memory, paging, block I/O, traps, and CPU activity.
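Typical invocations (the intervals and counts are just examples; iostat is provided by the sysstat package):
iostat -x 2
vmstat 2 5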
5. Log Files and System Logging:
Log files are like the diaries of your Linux system. They record everything happening on your system, from successful operations to errors, which is useful for troubleshooting, performance monitoring, and system auditing.
1. /var/log/ Directory:
Most system logs are stored in the /var/log/ directory. This is where you’ll find logs for different services, system events, and applications.
Here are some key log files:
2. journalctl (Systemd Logs):
If your Linux system uses systemd (which many modern Linux distros do), the command journalctl is used to access the system logs.
journalctl collects logs from all system services and applications in one place. You can view logs by date, severity, or related to specific services.
Example usage:
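journalctl -u ssh.service        # logs for a specific service
journalctl --since "1 hour ago"  # logs from the last hour
journalctl -p err                # only error-level messages and above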
3. dmesg (Kernel Ring Buffer):
The dmesg command shows messages from the kernel ring buffer, mostly related to hardware and driver events.
Summary:
Monitoring Performance:
Understanding system performance and logs is key to troubleshooting and optimizing your Linux system!
Common Linux log files names and usage:
Example 1: /var/log/syslog or /var/log/messages
This file contains general system messages, including information from the kernel, boot process, and system services. Here’s an example of what a few lines might look like:
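A typical line looks like this:
Oct 13 10:30:25 my-computer systemd[1]: Started Network Manager Script Dispatcher Service.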
Breaking It Down:
Oct 13 10:30:25: This is the timestamp, telling you when the event occurred (date and time).
my-computer: This is the hostname of the machine generating the log entry.
systemd[1]: This is the process name (systemd in this case) and the process ID (in brackets). systemd is the system and service manager on Linux.
Started Network Manager Script Dispatcher Service: This is the actual message describing what happened. In this case, the Network Manager's Script Dispatcher Service has started.
The gnome-shell error at the end of the log refers to a graphical shell (in this case, GNOME) encountering an OpenGL error (Failed to make current with error GLXBadContext), which could indicate an issue with rendering the desktop environment.
Example 3: /var/log/dmesg
This file contains messages from the kernel ring buffer, mostly hardware and driver events recorded as the system boots.
Example 4: /var/log/boot.log
This file logs the output of the boot process, showing services and processes that start up when the system boots.
Breaking It Down:
CUPS stands for Common Unix Printing System. It is a modular printing system for Unix-like operating systems (like Linux and macOS) that allows a computer to act as a print server.
In this case, the key message to note is the FAILED entry, which highlights an issue that might need troubleshooting (perhaps the display manager wasn’t configured properly).
Example 5: /var/log/kern.log
This file records kernel-specific messages. Here's an example:
Breaking It Down:
If you're trying to fix an issue or optimize performance, your system's logs can give you clues about what's happening behind the scenes.
The kernel is the core part of an operating system, responsible for managing the system's hardware, resources, and providing a bridge between applications and the hardware. On Linux, the kernel is the central part of the OS that:
Linux has a monolithic kernel, meaning it includes the core services of the system, including drivers for various hardware devices, all within the kernel space. However, it can also dynamically load and unload functionality, which brings us to kernel modules.
Kernel modules are pieces of code that can be loaded into the kernel at runtime to extend its functionality without needing to reboot or recompile the kernel. They’re typically used for:
Common Commands for Managing Kernel Modules:
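lsmod: list the modules currently loaded
modprobe <module>: load a module (and its dependencies)
rmmod <module>: remove a loaded module
modinfo <module>: show details about a module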
Example:
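sudo modprobe usb_storage    (usb_storage is just an example module name)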
Basic Kernel Configuration
The kernel configuration involves setting up how the kernel should behave or determining which features or modules are compiled into it. When compiling or modifying a Linux kernel, you configure what gets included or excluded.
Key Configuration Areas:
Imagine you plug in a new USB network adapter, but your Linux system doesn’t recognize it right away. If the driver for this device isn’t included in the kernel by default, you may need to load the appropriate kernel module.
Steps:
Identify the hardware: Using dmesg or lspci/lsusb commands to get information about the hardware.
Load the module: Use modprobe to load the appropriate driver module (modprobe also pulls in any dependencies the module needs).
Verify it’s loaded: Use lsmod to check if the module is loaded.
Let’s say you connect a new piece of hardware (like a printer or a USB stick). Instead of having the drivers for every possible device loaded into the kernel at all times (which would be inefficient), the kernel can dynamically load the required drivers as needed. The kernel will look for the appropriate kernel module (if it exists) and load it.
For example:
When a device is no longer in use, or if you no longer need certain functionality, you can remove modules to free up system resources. For example, to remove a module for a USB device you no longer use:
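sudo modprobe -r usb_storage    (or sudo rmmod usb_storage; the module name is just an example)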
For more advanced users, the Linux kernel can be customized and recompiled to suit specific hardware or performance needs. During the configuration process (before compiling the kernel), you can choose which drivers to compile into the kernel or as modules.
For instance:
Summary:
By understanding kernel modules and configuration, you can customize and optimize your Linux system to meet your needs and efficiently manage hardware.
What is the Kernel?
Imagine your computer as a big company. In this company, there are workers (your hardware) like:
Now, running this company efficiently requires a manager who:
This manager is the kernel. It’s the core part of your operating system that manages all the workers (hardware) and makes sure everything runs smoothly.
How Does the Kernel Work?
The kernel is in charge of making sure the hardware (CPU, memory, disk, etc.) works together. For example:
Providing a Communication Bridge: The kernel sits between the hardware (the physical machine) and the software (the programs you run, like browsers, music players, etc.). Whenever a program wants to talk to the hardware, it asks the kernel to handle it.
For example:
The kernel is like a manager who doesn’t always need all the tools in the office. Sometimes you might need a specific tool (say, a screwdriver for a specific task), but most of the time, you don’t need it. Instead of cluttering the office with unnecessary tools, the manager can load and unload them as needed.
These tools in the kernel are called kernel modules. They are small pieces of code (like extra drivers or functions) that can be added or removed without having to restart the entire company (the computer).
Example:
Example: How the Kernel Handles a Task
Imagine you want to open a document from your hard drive. Here’s how the kernel manages this:
The Request: You open a word processor like LibreOffice, and ask it to open a document.
Communication: LibreOffice (the software) doesn’t know how to talk directly to the hard drive, so it asks the kernel: "Hey, I need this file from the hard drive."
Hardware Handling: The kernel then tells the CPU: "Go fetch this file from the hard drive."
The kernel coordinates the entire process, ensuring that your hardware and software work together without any issues.
The Kernel in Action: Multiple Jobs at Once
Let's say you're listening to music, writing a document, and downloading a file — all at the same time. How does your computer handle all of this?
The kernel is responsible for multitasking. It makes sure the CPU divides its time between each task:
The kernel keeps all these tasks running without the computer crashing or slowing down. It ensures that each task gets the right amount of resources without interfering with others.
Kernel Configuration
The kernel can be customized for different needs. For example, you can choose to include or exclude certain features in the kernel, like:
You can even recompile the kernel with specific options based on your system's needs, but this is a more advanced task that experienced users or system administrators usually handle.
Summary:
By understanding how the kernel works, you get a sense of how Linux (or any operating system) manages the complex interactions between your hardware and software!
In week three, we built on foundational Linux skills and moved into essential system administration topics, covering package management, user and group management, disk management, system monitoring, and an introduction to the Linux kernel. Here’s a recap of each section.
Managing software installations, updates, and removals is a critical part of administering any Linux system. Each distribution typically has its package manager:
apt (Advanced Package Tool): Used on Debian-based systems like Ubuntu. Example commands:
Package managers simplify software management by automating installation, updating, and dependency resolution, which makes maintaining a Linux system more efficient.
Understanding users and groups is crucial for system security and access control.
Adding, Modifying, and Deleting Users:
Managing Groups:
Understanding /etc/passwd and /etc/shadow Files:
User and group management is essential for securing a system and ensuring that only authorized individuals have the necessary permissions.
Managing disks, partitions, and file systems helps ensure the efficient and reliable storage of data.
Disk Utilities:
Partition Types:
Disk management, including partitioning and file systems, is essential for effective data organization, resource allocation, and system performance.
Monitoring system performance is essential for keeping Linux systems running smoothly and efficiently.
Performance Monitoring Tools:
Log Files and System Logging:
Regular system monitoring and log review help identify issues, improve performance, and ensure the system runs smoothly.
The kernel is the core component of Linux, responsible for managing hardware, processes, and system resources.
Kernel Modules: Small pieces of code that can be loaded or unloaded into the kernel as needed. Examples include drivers for hardware.
Understanding the Linux kernel and its modules is essential for system customization, troubleshooting, and optimizing hardware performance.
a) yum
b) pacman
c) apt
d) dnf
a) adduser
b) userdel
c) usermod
d) passwd
a) To store user password hashes
b) To store user account information, including usernames and user IDs
c) To store system configuration files
d) To store kernel module information
a) df
b) lsblk
c) du
d) blkid
a) Displays the current file system usage
b) Connects a file system to a directory
c) Creates a new disk partition
d) Formats a disk with a file system
a) htop is a graphical version of top with more features.
b) top is used for file operations, while htop manages processes.
c) htop can only be used with root privileges, while top cannot.
d) There is no difference; they are identical.
a) A type of user-space application
b) A program that runs outside of the Linux kernel
c) A piece of code that extends kernel functionality without rebooting
d) A configuration file for kernel settings
a) lsmod
b) modprobe
c) rmmod
d) insmod
a) Group information
b) Encrypted user password hashes
c) Usernames and home directories
d) Disk partition information
a) vmstat
b) df
c) du
d) lsblk
Answer: c) apt
Answer: a) adduser
Answer: b) To store user account information, including usernames and user IDs
Answer: c) du
Answer: b) Connects a file system to a directory
Answer: a) htop is a graphical version of top with more features.
Answer: c) A piece of code that extends kernel functionality without rebooting
Answer: b) modprobe
Answer: b) Encrypted user password hashes
Answer: a) vmstat
When using a Linux system (or any computer), security is about protecting your system and data from unauthorized access, breaches, or damage. Here are some fundamental security concepts to keep in mind.
User Accounts and Permissions:
Each user has their own account and set of permissions that dictate what they can do on the system (e.g., view files, modify system settings).
It’s important to give users only the access they need. This is called the principle of least privilege.
Authentication:
Ensuring that users are who they say they are. This is usually done with passwords, but can also involve other methods like SSH keys or multi-factor authentication (MFA).
Authorization:
Once a user is authenticated, authorization determines what they can access. For example, even if you're logged in, you might not have permission to modify important system files.
Encryption:
Encrypting data makes it unreadable to unauthorized users. This can be done for data stored on your disk (e.g., encrypted file systems) or data being sent over the internet (e.g., HTTPS).
Updates and Patching:
Regularly updating software is a basic security practice to ensure vulnerabilities are patched. Unpatched software is one of the biggest security risks.
Auditing and Monitoring:
Keeping an eye on logs and system behavior helps detect if anything unusual is happening. Logs contain records of activities like login attempts, program execution, and file access.
A firewall is a security system that controls incoming and outgoing network traffic based on predefined security rules. It acts as a barrier between your system and untrusted networks (like the internet), allowing or blocking traffic based on your configuration.
1. UFW (Uncomplicated Firewall):
Example basic rules in UFW:
Allow SSH (used to connect remotely):
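sudo ufw allow ssh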
Deny all incoming traffic by default:
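sudo ufw default deny incoming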
Enable the firewall:
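sudo ufw enable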
iptables is the more advanced and flexible tool that sits behind UFW. It works by defining rules that control how packets (data sent over the network) are handled. These rules are organized into tables, each with a specific purpose.
iptables is very powerful but can be complicated for beginners.
There are three basic chains that iptables uses:
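INPUT: traffic arriving at your machine
OUTPUT: traffic leaving your machine
FORWARD: traffic being routed through your machine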
Example iptables rules:
Allow incoming traffic on port 80 (HTTP):
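sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT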
Drop all incoming connections by default:
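sudo iptables -P INPUT DROP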
Strong Passwords:
Always use strong, unique passwords for your accounts. A strong password typically includes a mix of letters, numbers, and symbols.
Tools like password managers can help generate and store strong passwords.
Use SSH Instead of Telnet:
When remotely logging into your system, use SSH (Secure Shell) instead of Telnet, as SSH encrypts the connection. Telnet is insecure because it sends data, including your password, in plain text.
Keep Software Up-to-Date:
Regularly update your system to apply security patches. This includes the operating system and installed applications. You can use a package manager like apt or dnf to ensure everything is up-to-date.
Configure the Firewall:
Use UFW or iptables to configure your firewall. You should deny all unnecessary incoming connections and only allow traffic that you explicitly need, like SSH or web traffic.
Disable Root Login:
For security reasons, it's a good idea to disable direct root login, especially over SSH. Instead, use sudo to run administrative commands. This adds an extra layer of protection by requiring user authentication before running important commands.
Use Sudo for Privileged Operations:
Instead of logging in as root, use sudo to run commands that require administrative privileges. This prevents accidental system changes and makes it easier to keep track of who is doing what.
Monitor Log Files:
Regularly check log files for unusual activity, such as failed login attempts or unauthorized access. Important logs can be found in the /var/log directory.
For example, you might check the auth.log file to see login attempts:
cat /var/log/auth.log
Limit User Access:
Only give users access to what they need. If someone doesn’t need access to certain files or commands, don’t give them the permissions.
Use chown and chmod to properly manage file permissions.
Summary:
By keeping these concepts in mind and properly managing your firewall and access controls, you can ensure your Linux system stays safe from many common security threats.
OpenSSL Overview
OpenSSL is a powerful, open-source toolkit widely used for implementing cryptographic functions like encryption, decryption, and secure communications via protocols like TLS (which is used for HTTPS websites). It also helps create and manage certificates, keys, and cryptographic algorithms.
What is OpenSSL Used For?
You can use OpenSSL to encrypt files so that only authorized users can decrypt them. Here's how to encrypt and decrypt a file.
Encrypting a File
Choose an Encryption Algorithm: AES (Advanced Encryption Standard) is one of the most common and secure encryption methods. You can use AES with a 256-bit key for high security.
Encrypt a File Using OpenSSL:
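openssl enc -aes-256-cbc -salt -in myfile.txt -out myfile.enc    (myfile.txt is the file to encrypt; myfile.enc is the encrypted output)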
Explanation:
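enc: use OpenSSL's symmetric encryption routines
-aes-256-cbc: the cipher to use (AES with a 256-bit key in CBC mode)
-salt: add a random salt, which strengthens the password-derived key
-in / -out: the input file and the encrypted output file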
You'll be prompted to create a password. Make sure to remember it, as it will be required to decrypt the file.
Decrypting a File
To decrypt the file, use the following command:
openssl enc -d -aes-256-cbc -in myfile.enc -out myfile_decrypted.txt
Explanation:
The rest of the command is the same, except the output file is now the decrypted version (myfile_decrypted.txt).
You'll be asked for the password that was used during encryption to decrypt the file.
Generating SSL/TLS Certificates Using OpenSSL
One of the key features of OpenSSL is generating SSL/TLS certificates, which are used to secure websites and communications. This process involves generating a private key, a certificate signing request (CSR), and optionally a self-signed certificate.
Step 1: Generate a Private Key
A private key is used to create the CSR and certificate. Run the following command to create a 2048-bit RSA private key:
openssl genpkey -algorithm RSA -out my_private_key.pem -aes256
Explanation:
Step 2: Generate a Certificate Signing Request (CSR)
The CSR is a file you give to a certificate authority (CA) to get an SSL certificate. It includes details about your domain and organization. If you're creating a self-signed certificate, you can skip this step.
To create the CSR:
openssl req -new -key my_private_key.pem -out my_csr.pem
Explanation:
You'll be asked for details like your Country, State, Organization, and Common Name (domain name).
Step 3: Create a Self-Signed Certificate
If you're not getting your certificate signed by a Certificate Authority (CA), you can create a self-signed certificate for testing or internal use.
To create a self-signed certificate that is valid for 1 year (365 days):
openssl req -x509 -new -key my_private_key.pem -out my_certificate.pem -days 365
Explanation:
When you generate or view a certificate, it contains several fields:
Summary
By learning to use OpenSSL, you can enhance the security of your data and secure your communications effectively.
Create a file named secure.txt in your home directory.
Change the permissions so that only the owner can read and write the file.
Verify the new permissions using the ls -l command.
Steps:
Create a new user named testuser.
Add the user to the sudo group to give them administrative privileges.
Log in as testuser and verify they can run commands with sudo.
Steps:
Install UFW (Uncomplicated Firewall) if it’s not already installed.
Allow incoming SSH connections.
Block all other incoming connections.
Enable the firewall and verify the rules.
Steps:
Use the ps or top command to list running processes.
Identify a process running under your username.
Kill that process using the kill command.
Steps:
Use iptables to block traffic to port 80 (HTTP) on your machine.
Verify that HTTP requests are blocked.
Delete the rule to restore normal traffic.
Steps:
Create a text file named secret.txt containing some sensitive information.
Use OpenSSL to encrypt the file with AES-256.
Decrypt the file and verify the contents are intact.
Steps:
Use OpenSSL to generate a 2048-bit RSA private key.
Secure the private key by encrypting it with AES-256.
View the private key to verify it is encrypted.
Steps:
Generate a new private key for SSL/TLS use.
Create a Certificate Signing Request (CSR).
Generate a self-signed certificate valid for 365 days using OpenSSL.
Steps:
Create a directory with several files in it.
Use the tar command to create an archive of the directory.
Encrypt the archive using OpenSSL.
Steps:
Create a file named checksum.txt.
Use OpenSSL to generate an MD5 checksum for the file.
Modify the file and check if the checksum changes.
Steps:
These exercises provide hands-on practice with key Linux security tools and concepts, including managing file permissions, using OpenSSL for cryptography, and basic firewall and process control. They help solidify knowledge of security essentials and encryption techniques for protecting files and system integrity.
Backing up your data is critical to ensure that you can recover important files in case of hardware failure, accidental deletion, corruption, or other disasters. A good backup strategy will ensure your data is safe, secure, and recoverable.
A complete copy of all the data. This ensures that you have a complete snapshot but can take more time and storage space.
Example: You back up every file on your computer.
Only the data that has changed since the last backup is saved. It is faster and uses less space than a full backup.
Example: After your full backup, you only save files that have been modified or created since the last backup.
Saves changes made since the last full backup. It’s quicker than a full backup but grows in size with each run.
Example: You do a full backup on Sunday, and on Monday you back up all the files that changed since Sunday. On Tuesday, you back up files that changed since Sunday again.
You don’t keep every backup forever. Instead, you keep a few of the latest backups and rotate them. A common strategy is to use the grandfather-father-son approach, where you rotate daily, weekly, and monthly backups.
On-Site: Keeping your backup locally on the same physical premises. This is fast for recovery but vulnerable to local disasters like fires or theft.
Off-Site: Storing a backup somewhere else, such as in the cloud or on a separate location. This increases the safety of your data but might be slower to restore.
Linux has several robust tools for backing up and synchronizing data. Two of the most widely used tools are rsync and tar.
rsync is a command-line utility used to copy and synchronize files and directories efficiently between two locations (either locally or remotely). It’s especially good for incremental backups, as it only transfers changed parts of files, reducing backup time and bandwidth usage.
To back up a directory /home/user/data to a backup location /mnt/backup, you can run:
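rsync -av /home/user/data /mnt/backup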
Explanation:
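-a: archive mode, which preserves permissions, timestamps, and symbolic links
-v: verbose output, so you can see what is being copied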
Network Backup Example:
To back up data to a remote server over SSH:
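rsync -av -e ssh /home/user/data user@remote:/backup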
-e ssh: Specifies to use SSH for the connection.
user@remote:/backup: The remote destination.
rsync Options:
--delete: Remove files from the destination that no longer exist in the source.
rsync -av --delete /home/user/data /mnt/backup
-z: Compresses data during transfer, useful for network backups.
tar is one of the most commonly used tools in Linux for creating compressed archives. Unlike rsync, which is more for file synchronization and copying, tar bundles multiple files into a single archive file and can optionally compress it.
Key Features of tar:
Creating a tar Backup:
You can create a compressed backup of a directory /home/user/data and save it as backup.tar.gz:
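tar -czvf backup.tar.gz /home/user/data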
Explanation:
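-c: create a new archive
-z: compress it with gzip
-v: verbose output
-f: the name of the archive file to write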
Extracting tar Backups:
To restore or extract the tar archive, you can use:
tar -xzvf backup.tar.gz -C /restore/destination
Creating a Backup Strategy Using rsync and tar:
Full Backup with rsync:
Schedule a weekly full backup using rsync:
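A sketch of what this could look like in a crontab (the schedule and paths are examples):
0 3 * * 0 rsync -av --delete /home/user/data /mnt/backup/full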
Incremental Backup with rsync:
You can create daily incremental backups that only copy changes:
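For example, a daily cron entry (again, the schedule and paths are examples):
0 2 * * * rsync -av /home/user/data /mnt/backup/daily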
Creating tar Archives for Long-Term Storage:
Create a monthly full archive backup of your data using tar for long-term storage:
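For example (the destination path is a placeholder):
tar -czvf /mnt/backup/archives/data-$(date +%Y-%m).tar.gz /home/user/data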
Restoring Backups:
From rsync: If you need to restore a file or directory, use rsync to copy it back to its original location:
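rsync -av /mnt/backup/data /home/user/    (adjust the paths to match where your backup lives and where the data belongs)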
From tar Archive: To restore a specific file or directory from a tar backup:
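tar -xzvf backup.tar.gz --wildcards '*/docs/*' -C /restore/destination    (the pattern and destination are placeholders)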
--wildcards: This allows you to extract specific files by pattern.
By combining rsync for frequent backups and tar for long-term, compressed archives, you can create an effective backup strategy to protect your data.
Create a directory named project and add a few files (file1.txt, file2.txt).
Use rsync to back up the project directory to a new location /backup/project_backup.
Verify that the files are correctly copied.
Steps:Create directory and files:
mkdir ~/project && touch ~/project/file1.txt ~/project/file2.txt
Backup using rsync:
rsync -av ~/project /backup/project_backup
Verify the files:
ls /backup/project_backup
Add a new file file3.txt to the project directory.
Use rsync to update the backup by only copying the new file.
Verify that only file3.txt is added to the backup.
Steps:Add file:
touch ~/project/file3.txt
Run incremental backup:
rsync -av ~/project /backup/project_backup
Verify:
ls /backup/project_backup
Delete file2.txt from the original project directory.
Use rsync with the --delete option to synchronize the backup.
Verify that file2.txt is removed from the backup as well.
Steps:Delete file:
rm ~/project/file2.txt
Sync and delete in backup:
rsync -av --delete ~/project /backup/project_backup
Verify deletion:
ls /backup/project_backup
Create a new directory docs and add files (doc1.txt, doc2.txt).
Create a tar archive of the docs directory named docs_backup.tar.gz.
Verify that the archive is created successfully.
Steps:Archive the directory:
tar -czvf docs_backup.tar.gz -C ~ docs    (-C ~ stores the docs folder relative to your home directory, so it extracts cleanly as docs/)
Verify archive:
ls docs_backup.tar.gz
Use the docs_backup.tar.gz archive created in the previous exercise.
Extract the archive into a new directory /restore/docs_restored.
Verify that the files have been restored.
Steps:Extract archive:
mkdir -p /restore/docs_restored
tar -xzvf docs_backup.tar.gz -C /restore/docs_restored
Verify files:
ls /restore/docs_restored/docs
Archive and compress the project directory into project_backup.tar.gz.
Use openssl to encrypt the archive using AES-256.
Verify that the archive is encrypted.
Steps:Archive the project:
tar -czvf project_backup.tar.gz -C ~ project
Encrypt with OpenSSL:
openssl enc -aes-256-cbc -salt -in project_backup.tar.gz -out project_backup_encrypted.tar.gz
Verify:
ls project_backup_encrypted.tar.gz
Use the encrypted archive project_backup_encrypted.tar.gz created in the previous exercise.
Decrypt the archive using openssl.
Extract the decrypted archive and verify the files are intact.
Steps:Decrypt the archive:
openssl enc -d -aes-256-cbc -in project_backup_encrypted.tar.gz -out project_backup_decrypted.tar.gz
Extract the decrypted archive:
mkdir -p ~/restored_project && tar -xzvf project_backup_decrypted.tar.gz -C ~/restored_project
Verify the files:
ls ~/restored_project
Set up a cron job to automatically back up the project directory every day at 2 AM using rsync.
Verify that the cron job is set up correctly by checking the cron file.
Steps:Edit the crontab:
crontab -e
Add the following line to schedule the backup:
0 2 * * * rsync -av ~/project /backup/project_backup
Verify the cron job:
crontab -l
Create a full backup of the project directory with tar.
Modify one of the files in the project directory.
Create a differential backup that includes only the changed files.
Steps:Full backup:
tar -czvf project_full_backup.tar.gz ~/project
Modify a file:
echo "New Data" >> ~/project/file1.txt
Create differential backup:
tar -czvf project_diff_backup.tar.gz --newer=./project_full_backup.tar.gz ~/project
Set up rsync to back up the project directory to a remote server.
Verify that the files are correctly copied to the remote server.
Steps:Use rsync to back up to remote:
rsync -av ~/project user@remote:/backup/project_backup
Log in to the remote server and verify:
ssh user@remote
ls /backup/project_backup
Automating tasks in Linux is essential for performing repetitive tasks without manual intervention. Two commonly used tools for scheduling tasks are cron and anacron. Let’s break down how each works and when to use them, as well as the basics of scheduling tasks.
Cron is a time-based job scheduler in Unix-like operating systems. It allows you to automate scripts or commands to run at specified times and intervals. Cron jobs are widely used for routine maintenance, backups, notifications, and more.
How Cron Works
Cron Daemon (crond): The cron daemon runs in the background and checks the crontab (cron tables) for scheduled tasks.
Crontab (Cron Tables): A configuration file where you specify the command or script to run, along with the schedule.
Crontab Syntax
Each line in a crontab file represents a job and follows a specific format with five fields:
* * * * * command
| | | | |
| | | | ----- Day of the week (0 - 7) (0 or 7 is Sunday)
| | | ------- Month (1 - 12)
| | --------- Day of the month (1 - 31)
| ----------- Hour (0 - 23)
------------- Minute (0 - 59)
For example, the entry 30 2 * * * /home/user/backup.sh would run backup.sh at 2:30 AM every day.
Managing Cron Jobs
To edit your crontab file, use:
crontab -e
To list the current scheduled jobs, use:
crontab -l
Use Cases for Cron
Cron Example
You want to run a script every Monday at 6 AM to update your system:
0 6 * * 1 /usr/bin/apt update && /usr/bin/apt upgrade -y
This command updates and upgrades the system every Monday at 6 AM.
Anacron is similar to cron, but it’s designed for systems that are not running 24/7, such as laptops or desktops that might be turned off at the time a cron job was supposed to run.
Differences Between Anacron and Cron
Anacron jobs are defined in /etc/anacrontab. Each job specifies how many days can pass between executions, along with a command to run.
The format is as follows:
period(days)   delay(minutes)   job-identifier   command
Anacron Example
Suppose you want to run a backup every 7 days (weekly):
7 10 weekly-backup /path/to/backup.sh
This tells anacron to wait 10 minutes after system startup, then run backup.sh if 7 days have passed since the last time it was executed.
Use Cases for AnacronBoth cron and anacron can be used to schedule tasks, but their use depends on the nature of the system and the task:
For systems that run most of the time but may occasionally shut down, you can combine cron and anacron. For example:
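The original example isn't shown here, so this is just an illustration and the script paths are made up. Short, frequent jobs stay in the user's crontab, while the once-a-day job goes in /etc/anacrontab so it still runs if the machine was off at the scheduled time:
# crontab entry: runs every 15 minutes, but only while the machine is on
*/15 * * * * /usr/local/bin/check_disk_space.sh
# /etc/anacrontab entry: daily backup, run 10 minutes after boot if it was missed
1 10 daily-backup /usr/local/bin/backup.sh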
Wildcards and Ranges
Cron also supports special time macros for common schedules, such as @reboot, @hourly, @daily, @weekly, @monthly, and @yearly.
For example:
@daily /path/to/daily_backup.sh
This runs the daily_backup.sh script every day at midnight.
To ensure that cron jobs are running correctly, Linux systems often log the results of cron jobs to system logs. You can check cron logs using:
grep cron /var/log/syslog
or
journalctl -u cron
This helps you verify if a job ran successfully or if there were any errors.
Summary
These tools form the backbone of automated task scheduling in Linux, making them critical for managing system processes and ensuring consistent, timely maintenance.
Create a script hello.sh that prints "Hello, world!" to a file.
Schedule the script to run every 5 minutes using cron.
Verify that the script runs at the scheduled time by checking the output file.
Steps:Create the script:
echo 'echo "Hello, world!" >> ~/hello_output.txt' > ~/hello.sh
chmod +x ~/hello.sh
Schedule it with cron:
crontab -e
Add this line:
*/5 * * * * ~/hello.sh
Verify by checking the output file:
cat ~/hello_output.txt
Create a backup script backup.sh that copies files from ~/project to /backup/project.
Schedule the script to run every day at 2 AM using cron.
Verify that the backup occurs.
Steps:Create the script:
echo 'rsync -av ~/project /backup/project' > ~/backup.sh
chmod +x ~/backup.sh
Schedule with cron:
crontab -e
Add this line:
0 2 * * * ~/backup.sh
Write a script that updates the system (apt update and apt upgrade).
Use the @weekly special time macro to schedule the script to run once every week.
Steps:Create the script:
echo 'sudo apt update && sudo apt upgrade -y' > ~/system_update.sh
chmod +x ~/system_update.sh
Schedule using @weekly:
crontab -e
Add this line:
@weekly ~/system_update.sh
Create a script log.sh that appends the current date and time to a log file.
Schedule it to run every day between 8 AM and 6 PM, every 30 minutes.
Steps:Create the script:
echo 'date >> ~/log_output.txt' > ~/log.sh
chmod +x ~/log.sh
Schedule with a time range:
crontab -e
Add this line:
*/30 8-18 * * * ~/log.sh
Write a script startup_message.sh that writes "System has started!" to a file.
Use the @reboot cron option to run the script every time the system reboots.
Verify that the script runs on reboot.
Steps:Create the script:
echo 'echo "System has started!" >> ~/startup_log.txt' > ~/startup_message.sh
chmod +x ~/startup_message.sh
Schedule using @reboot:
crontab -e
Add this line:
@reboot ~/startup_message.sh
Write a script daily_cleanup.sh that deletes files older than 7 days from ~/Downloads.
Schedule it to run daily using anacron, with a 10-minute delay after system boot.
Verify that the script runs when the system starts.
Steps:Create the script:
echo 'find ~/Downloads -type f -mtime +7 -exec rm {} \;' > ~/daily_cleanup.sh
chmod +x ~/daily_cleanup.sh
Add the task to /etc/anacrontab:
sudo nano /etc/anacrontab
Add this line:
1 10 daily-cleanup ~/daily_cleanup.sh
Write a script weekly_backup.sh that backs up your home directory to /backup/home_backup.
Schedule it to run every week using anacron, with a 15-minute delay after system boot.
Steps:Create the script:
echo 'rsync -av ~/ /backup/home_backup' > ~/weekly_backup.sh
chmod +x ~/weekly_backup.sh
Add the task to /etc/anacrontab:
sudo nano /etc/anacrontab
Add this line:
7 15 weekly-backup ~/weekly_backup.sh
Schedule a cron job to run a script that does not exist (non_existent.sh).
Check the cron log to see the error messages related to the missing file.
Correct the cron job to run an actual script.
Steps:Schedule the broken job:
crontab -e
Add this line:
* * * * * ~/non_existent.sh
Wait a few minutes, then check the cron log:
grep cron /var/log/syslog
Correct the cron job to a valid script.
Schedule a cron job that logs the current system uptime to a file every 10 minutes.
Check the cron logs to confirm that the job runs successfully.
Steps:Create the script:
echo 'uptime >> ~/uptime_log.txt' > ~/log_uptime.sh
chmod +x ~/log_uptime.sh
Schedule the cron job:
crontab -e
Add this line:
*/10 * * * * ~/log_uptime.sh
View the cron logs to confirm:
grep cron /var/log/syslog
Write a script health_report.sh that gathers system information (disk usage, memory usage, CPU load) and saves it to a report file.
Schedule it to run daily at 1 AM.
Verify that the reports are being generated.
Steps:Create the script:
echo 'df -h > ~/system_health.txt && free -h >> ~/system_health.txt && uptime >> ~/system_health.txt' > ~/health_report.sh
chmod +x ~/health_report.sh
Schedule the cron job:
crontab -e
Add this line:
0 1 * * * ~/health_report.sh
After mastering basic shell scripting, the next step is learning advanced scripting techniques to handle more complex tasks. This includes using control structures, functions, arrays, and more to write efficient, reusable, and maintainable scripts. Script debugging and optimization are essential for identifying and resolving errors and improving the performance and readability of scripts.
Functions allow you to reuse blocks of code without repetition, making your scripts cleaner and more modular. They are especially useful when the same code needs to be executed multiple times.
Syntax:
function_name() {
    # commands
}
Example:
#!/bin/bash
# Define a function to check disk usage
check_disk_usage() {
df -h
}
# Call the function
check_disk_usage
Arrays in shell scripting allow you to store multiple values in a single variable. Arrays can be indexed by numbers and are useful when handling lists of data (e.g., file paths or user inputs).
Syntax:
array_name=(element1 element2 element3)
Example:
#!/bin/bash
# Define an array of file names
files=("file1.txt" "file2.txt" "file3.txt")
# Loop through array elements and print them
for file in "${files[@]}"; do
echo "Processing $file"
done
Loops allow you to execute commands repeatedly based on a condition. In addition to the basic for loop, you can use while and until loops to control the flow of your script based on conditions.
For Loop:
for i in {1..5}; do
echo "Number: $i"
done
While Loop:
count=1
while [ $count -le 5 ]; do
echo "Count: $count"
count=$((count + 1))
done
Until Loop:
count=5
until [ $count -lt 1 ]; do
echo "Countdown: $count"
count=$((count - 1))
done
Control structures such as if, else, and case help make decisions in scripts based on conditions.
If-Else Statement:
#!/bin/bash
# Check if a file exists
if [ -f "/home/harrycallahan/loop1.sh" ]; then
echo "File exists."
else
echo "File does not exist."
fi
Case Statement:
A case statement is useful when you need to choose between several options.
#!/bin/bash
# Determine the day of the week
day=$(date +%A)
case $day in
Monday)
echo "It's Monday, start of the week!"
;;
Friday)
echo "It's Friday, almost weekend!"
;;
*)
echo "It's $day."
;;
esac
Advanced scripts often require redirecting input and output. Redirection can send the output of a command to a file or use a file as input.
Standard Output (stdout) Redirection:
command > output.txt # Overwrites output.txt
command >> output.txt # Appends to output.txt
Standard Input (stdin) Redirection:
command < input.txt # Reads input from input.txt instead of the keyboard
Standard Error (stderr) Redirection:
command 2> errors.txt # Sends error messages to errors.txt
Combine stdout and stderr:
command > output.txt 2>&1 # Sends both normal output and errors to output.txt
Even seasoned scripters face errors in their code. Debugging is the process of identifying and fixing these errors. Shell scripting has built-in mechanisms to help you trace, log, and debug scripts.
a. Enabling Debug Mode
You can enable debug mode in a script to show each command and its result as it executes.
To debug an entire script, add the -x option to the shebang:
#!/bin/bash -x
Or run the script with:
bash -x script.sh
The set command allows you to control shell options within the script. Some helpful options for debugging include set -x (print each command as it runs), set -e (exit as soon as a command fails), and set -u (treat unset variables as errors).
Example:
#!/bin/bash
set -x # Enable debug mode
file="/path/to/file"
if [ -f "$file" ]; then
echo "File exists"
else
echo "File not found"
fi
The trap command allows you to capture signals or errors and execute custom commands when they occur. This is useful for cleaning up temporary files or printing messages before a script exits.
Syntax:
trap 'commands' SIGNAL
Example:
#!/bin/bash
trap 'echo "An error occurred. Exiting..."; exit 1;' ERR
# Simulate an error
ls /nonexistent/file
Use logging to track what your script is doing. You can use echo commands or redirect output to log files.
Example:
#!/bin/bash
log_file="/path/to/log_file.log"
# Log the start of the script
echo "Script started at $(date)" >> $log_file
# Perform some task and log the result
if [ -f "/path/to/file" ]; then
echo "File exists." >> $log_file
else
echo "File does not exist." >> $log_file
fi
# Log the end of the script
echo "Script ended at $(date)" >> $log_file
After debugging, you can improve the performance and efficiency of your scripts by optimizing them. Optimization focuses on making your script run faster, use fewer resources, or be easier to maintain.
a. Avoiding Unnecessary Subshells
Each time you use a command within $(), it spawns a subshell. Reduce the number of subshells for better performance.
Example:
Instead of:
result=$(cat file.txt)
Use:
result=$(< file.txt)
This avoids spawning an extra process.
b. Use Built-in Shell Commands
Built-in shell commands ([[ ]], test, let, echo) are faster than external commands (expr, grep, awk). Prefer built-in commands for common tasks.
Example:
Instead of:
if echo "$name" | grep -q "admin"; then echo "match"; fi
Use:
if [[ $name == *admin* ]]; then echo "match"; fi
c. Minimize Loop Iterations
Try to minimize the number of iterations in loops by handling as much work as possible outside of the loop. Avoid using loops when a built-in command can handle the task more efficiently.
Example:
Instead of:
for file in $(ls *.txt); do
# process files
done
Use:
for file in *.txt; do
# process files
done
This avoids spawning unnecessary subshells for each ls command.
d. Use xargs for Efficient Pipelining
When working with a large number of files, xargs can speed up execution by processing multiple files at once, rather than one at a time.
Example:
Instead of:
find . -name "*.txt" | while read file; do
rm "$file"
done
Use:
find . -name "*.txt" -print0 | xargs -0 rm
This uses xargs to remove all files in a single command rather than repeatedly calling rm.
With these advanced scripting techniques, debugging methods, and optimization practices, you'll be able to write more efficient and reliable scripts, making your workflows more automated and error-resistant.
Write a script that includes two functions:
Create a script that:
Write a script that:
Create a script that:
Write a script that:
A script is supposed to create a backup of a file but has some errors. Debug the following script and make it work correctly:
#!/bin/bash
set -x
backup_file="/home/user/documents/file.txt.bak"
source_file="/home/user/documents/file.txt"
if [-f "$source_file"]; then
cp $source_file $backup_file
echo "Backup successful."
else
echo "Source file does not exist."
fi
Fix the errors and run the script in debug mode.
Write a script that:
Use trap to handle the error when the file is not found.
Create a script that:
Test the script with and without correct input to see how the set options work.
Create a script that:
Write a script that:
Project: Setting Up a Web Server (Apache) and a File Server (Samba)
Project OverviewEach project will include the installation, configuration, and basic management of these services, allowing users to apply their knowledge of permissions, networking, security, and system performance monitoring.
Open the terminal on your Linux server or virtual machine.
Update the package list:
sudo apt update
Install Apache:
Start and enable Apache to start on boot:
Open the Apache configuration file:
Modify the DocumentRoot to point to the directory where your website files will be stored. For example:
Save and exit the file (Ctrl+X, then Y to save).
Step 3: Create Your WebsiteCreate the directory for your website:
Add some simple HTML content; you can find example snippets on the web and paste them into a file such as index.html.
Save and exit the file.
Step 4: Adjust PermissionsSet the correct ownership and permissions for the web files:
Restart Apache to apply the changes:
Open a web browser and type in the IP address of your server. You should see the web page you just created!
If the server is running locally, you can open http://localhost or http://your-server-ip-address. You can use ip addr or ifconfig to find your IP address.
Step 6: Basic Security PracticesEnable the UFW firewall (if not enabled):
Allow HTTP and HTTPS traffic through the firewall:
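I haven't written out the individual commands for these steps above, so here is a rough sketch of what they would look like on Ubuntu/Debian (the site directory /var/www/html/mywebsite is just an example name):
sudo apt update                                           # Step 1: refresh package lists
sudo apt install apache2 -y                               # Step 1: install Apache
sudo systemctl start apache2                              # Step 2: start the service
sudo systemctl enable apache2                             # Step 2: start on boot
sudo nano /etc/apache2/sites-available/000-default.conf   # Step 2: set DocumentRoot /var/www/html/mywebsite
sudo mkdir -p /var/www/html/mywebsite                     # Step 3: create the website directory
sudo nano /var/www/html/mywebsite/index.html              # Step 3: paste in some HTML
sudo chown -R www-data:www-data /var/www/html/mywebsite   # Step 4: ownership
sudo chmod -R 755 /var/www/html/mywebsite                 # Step 4: permissions
sudo systemctl restart apache2                            # Step 4: apply changes
sudo ufw enable                                           # Step 6: enable the firewall
sudo ufw allow 'Apache Full'                              # Step 6: allow HTTP and HTTPS (profile provided by the apache2 package)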
Install the Samba package:
Create a directory that will be shared across the network:
Set the appropriate permissions for the directory:
Open the Samba configuration file:
Scroll to the bottom of the file and add the following configuration for your shared folder:
[SharedFolder]
path = /srv/samba/share
browseable = yes
read only = no
guest ok = yes
Save and exit the file.
Step 4: Restart SambaRestart the Samba service to apply the changes:
On a Windows or Linux machine, open the file manager and try to access the server using its IP address.
On Windows, you can open \\your-server-ip-address\SharedFolder in File Explorer.
On Linux, you can open the file manager and use the smb://your-server-ip-address/SharedFolder URL to access the shared folder.
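The commands for the Samba installation and configuration steps above aren't written out either; on Ubuntu/Debian they would look roughly like this (adjust the share path if yours differs):
sudo apt install samba -y                      # Step 1: install Samba
sudo mkdir -p /srv/samba/share                 # Step 2: create the shared directory
sudo chown -R nobody:nogroup /srv/samba/share  # Step 2: ownership suitable for guest access
sudo chmod -R 0775 /srv/samba/share            # Step 2: permissions
sudo nano /etc/samba/smb.conf                  # Step 3: add the [SharedFolder] block shown above
sudo systemctl restart smbd                    # Step 4: apply the changes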
Web Server (Apache) Exercises:
Create a Custom Virtual Host:
Add a New Web Page:
Configure HTTPS with a Self-signed Certificate:
Enable Directory Listing:
Create a Password-protected Share:
Create Multiple Shared Folders:
Monitor Samba Activity:
System Monitoring Tasks:
Monitor Apache Performance:
Monitor Samba Activity:
Basic Security Tasks:
Secure the Samba Share with User Authentication:
Firewall Configuration:
Step-by-Step Answer:
Create a New Virtual Host Configuration File:
Copy the default Apache virtual host configuration to a new file:
Edit the New Virtual Host File:
Open the new configuration file:
Change the DocumentRoot to point to a new directory:
Save and exit the file.
Create the Directory for the New Website:
Create a Simple index.html File:
Inside the new directory, create the index.html:
Add some simple content:
Update /etc/hosts to Map the Domain:
Open the /etc/hosts file to add a domain mapping:
Add the following line:
127.0.0.1 mynewwebsite.local
Enable the Virtual Host:
Enable the new site:
Disable the default site if necessary:
Reload Apache:
Test the Website:
Open a web browser and go to http://mynewwebsite.local. You should see your new site!
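As a rough guide, the commands behind these steps might look like this (file and directory names follow the ones used in this answer; yours may differ):
sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/mynewwebsite.conf
sudo nano /etc/apache2/sites-available/mynewwebsite.conf   # set DocumentRoot /var/www/html/mynewwebsite and ServerName mynewwebsite.local
sudo mkdir -p /var/www/html/mynewwebsite
sudo nano /var/www/html/mynewwebsite/index.html            # add some simple content
sudo nano /etc/hosts                                       # add: 127.0.0.1 mynewwebsite.local
sudo a2ensite mynewwebsite.conf
sudo a2dissite 000-default.conf                            # optional: disable the default site
sudo systemctl reload apache2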
Step-by-Step Answer:
Create a New Directory for the Second Page:
Create the about.html File:
Add some content about a new page:
Link the about.html to the Home Page:
Open the index.html file in the main directory:
Add a link to the new page:
<a href="/about/about.html">About Us</a>
Test the New Page:
Open a browser and go to http://localhost or your server's IP address. You should see the link to the "About Us" page. Click it to ensure the second page is working.
Step-by-Step Answer:
Generate a Self-signed SSL Certificate:
Create a directory to store the SSL certificate:
Generate the certificate and key:
Follow the prompts and fill in the required information.
Enable SSL Module in Apache:
Edit the Virtual Host to Use HTTPS:
Open your virtual host configuration file:
Add the following inside the file, replacing the existing VirtualHost block:
<VirtualHost *:443>
    DocumentRoot /var/www/html/mynewwebsite
    ServerName mynewwebsite.local
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/apache.crt
    SSLCertificateKeyFile /etc/apache2/ssl/apache.key
</VirtualHost>
Enable the SSL Site:
Restart Apache:
sudo systemctl restart apache2
Test HTTPS:
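Here is a sketch of the commands for the steps above (the certificate paths and site file name match this answer, but treat them as an example rather than the only way to do it):
sudo mkdir -p /etc/apache2/ssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/apache2/ssl/apache.key \
    -out /etc/apache2/ssl/apache.crt                        # answer the prompts for the certificate details
sudo a2enmod ssl                                            # enable the SSL module
sudo nano /etc/apache2/sites-available/mynewwebsite.conf    # add the <VirtualHost *:443> block shown above
sudo a2ensite mynewwebsite.conf
sudo systemctl restart apache2
# Then browse to https://mynewwebsite.local and accept the self-signed certificate warning.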
Add the Following to Enable Directory Listing:
Add the following Directory block:
<Directory /var/www/html/mywebsite>
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>
Save and Exit the File.
Restart Apache:
Test Directory Listing:
Step-by-Step Answer:
Add a Samba User:
You will be prompted to enter a password for the Samba user.
Edit the Samba Configuration File:
Add the Following Share Block:
Add the following at the end of the file:
[ProtectedShare]
path = /srv/samba/protected
valid users = username
guest ok = no
read only = no
Save and exit the file.
Create the Shared Directory:
Restart Samba:
Access the Share:
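A possible command sequence for this answer (replace username with a real Linux user on the server):
sudo smbpasswd -a username               # add a Samba password for an existing Linux user
sudo nano /etc/samba/smb.conf            # add the [ProtectedShare] block shown above
sudo mkdir -p /srv/samba/protected       # create the shared directory
sudo chown username:username /srv/samba/protected
sudo systemctl restart smbd
# From a client: \\your-server-ip-address\ProtectedShare (Windows) or smb://your-server-ip-address/ProtectedShare (Linux)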
Step-by-Step Answer:
Create Multiple Shared Directories:
Configure Samba for Both Folders:
Open the Samba configuration file:
Add the following blocks:
Restart Samba:
Test the Shares:
From another machine, connect to both shares (\\your-server-ip-address\PublicShare and \\your-server-ip-address\PrivateShare) to ensure they work as expected.
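The share definitions and commands for this exercise aren't spelled out above; a minimal sketch could look like this (paths, share names, and the valid user are examples):
sudo mkdir -p /srv/samba/public /srv/samba/private
sudo nano /etc/samba/smb.conf
# Example share definitions to add at the end of smb.conf:
# [PublicShare]
#    path = /srv/samba/public
#    browseable = yes
#    read only = no
#    guest ok = yes
#
# [PrivateShare]
#    path = /srv/samba/private
#    browseable = yes
#    read only = no
#    guest ok = no
#    valid users = username
sudo systemctl restart smbd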
Exercise 3: Monitor Samba Activity
Step-by-Step Answer:
Check Samba Log Files:
Use tail to monitor Samba logs in real time, for example:
sudo tail -f /var/log/samba/log.smbd
Access the Samba Share from a Client:
Perform some file access activities (e.g., copy a file to the shared folder).
Watch the log file and observe the entries for each access.
In week four, we expanded our focus to critical areas of system security, backup and recovery, task automation, advanced shell scripting, and hands-on project work. Here’s a recap of each area.
Security is essential in Linux, ensuring that systems are protected from unauthorized access, data breaches, and attacks.
Firewalls:
User and Permission Management: Regularly updating permissions and using secure passwords can help prevent unauthorized access.
Security Best Practices:
Implementing basic security concepts and firewall rules helps protect systems from potential threats and ensures secure access control.
A good backup strategy is essential to prevent data loss and facilitate system recovery in case of hardware failure, accidental deletion, or other issues.
Backup Tools:
Creating Backups:
Restoring Backups:
Regular backups and knowledge of restoration procedures ensure data availability and quick recovery from incidents.
Automating repetitive tasks increases efficiency and ensures consistency in system management.
cron:
anacron:
Task automation with cron and anacron helps maintain system health, perform regular maintenance, and free up time for administrators.
This week, we moved beyond basic scripting to explore more complex shell scripting techniques, which enable custom solutions for administrative tasks.
Advanced Syntax and Logic:
Script Debugging:
Advanced scripting provides the flexibility to automate complex tasks, enhance scripts’ reliability, and build tools tailored to system requirements.
To reinforce these skills, we worked on practical projects that combine learned concepts and provide hands-on experience with system management.
Practical projects provide real-world applications of Linux administration skills, allowing users to implement and test what they’ve learned in realistic scenarios.
Below is a 10-question multiple-choice quiz for Week 4, covering topics like basic security, backup and recovery, task automation, advanced scripting, and practical projects. Each question includes 4 options, followed by the correct answers for self-assessment.
a) ufw
b) iptables
c) firewall-cmd
d) All of the above
a) Gives full permissions to the owner, and no permissions to others
b) Gives read and write permissions to everyone
c) Removes execute permissions from all users
d) Makes the file executable only for the owner
a) To store encrypted passwords
b) To configure which users and groups have sudo privileges
c) To list installed packages
d) To configure system-wide firewall rules
a) cp
b) tar
c) rsync
d) mv
a) cron is for one-time tasks, while anacron runs tasks repeatedly.
b) There is no difference between cron and anacron.
c) cron only works for root users, while anacron works for all users.
d) cron can run tasks at specific times, but anacron is used for systems that may be offline during the scheduled time.
a) Improves manual monitoring
b) Makes the system more secure by default
c) Reduces repetitive tasks and human error
d) Provides graphical user interfaces for task management
a) openssl enc
b) openssl genrsa
c) openssl x509
d) openssl req
a) Scheduling tasks
b) Synchronizing files between systems or directories
c) Configuring firewall rules
d) Viewing system logs
a) Terminates the script immediately
b) Silences error messages in the script
c) Enables debugging by showing each command as it is executed
d) Creates a backup of the script
a) Store backup files only on the primary disk of the system.
b) Ensure backups are encrypted and stored in multiple locations.
c) Disable user permissions for accessing backups.
d) Use only physical backup methods like DVDs.
Answer: d) All of the above
Answer: a) Gives full permissions to the owner, and no permissions to others
Answer: b) To configure which users and groups have sudo privileges
Answer: b) tar
Answer: d) cron can run tasks at specific times, but anacron is used for systems that may be offline during the scheduled time.
Answer: c) Reduces repetitive tasks and human error
Answer: a) openssl enc
Answer: b) Synchronizing files between systems or directories
Answer: c) Enables debugging by showing each command as it is executed
Answer: b) Ensure backups are encrypted and stored in multiple locations.
Cloud services allow businesses and individuals to access computing resources (like servers, storage, databases, networking, and software) over the internet. This model provides scalability, flexibility, cost savings, and enhanced performance compared to traditional on-premise infrastructure. Cloud service providers offer a variety of services, including infrastructure, platforms, and software, each catering to different needs.
Let's take a look at the three major cloud providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), as well as some additional players in the cloud space.
Azure is Microsoft's cloud computing platform, offering a wide array of services similar to AWS. It integrates seamlessly with Microsoft's products (like Windows Server, Active Directory, Office 365, and SQL Server), making it a popular choice for enterprises heavily reliant on the Microsoft ecosystem. Key Services:
Strengths:
Google Cloud Platform (GCP) is Google's cloud offering. It's particularly well-suited for data-driven and machine learning workloads, thanks to its powerful tools and infrastructure, including the same infrastructure that powers Google Search, Gmail, and YouTube. Key Services:
Strengths:
AWS is the largest and most comprehensive cloud provider, offering over 200 services. It is widely used by startups, enterprises, and government agencies due to its vast array of services and global infrastructure. Key Services:
Strengths:
IBM Cloud provides a mix of IaaS, PaaS, and SaaS services, with an emphasis on enterprise solutions, AI, and hybrid cloud. Key Services:
Strengths:
Oracle Cloud focuses heavily on enterprise solutions, databases, and ERP systems. It’s a popular choice for companies using Oracle’s database and application suites. Key Services:
Strengths:
Alibaba Cloud is a leading cloud provider in China and across Asia, with a growing presence globally. It is known for its scalability and affordability. Key Services:
Strengths:
DigitalOcean: Known for its simplicity and ease of use, DigitalOcean is popular among developers and small businesses for quick deployments of virtual machines and managed Kubernetes.
Linode: Another developer-focused cloud provider that offers affordable and simple cloud infrastructure solutions, especially for small businesses and startups.
Vultr: Similar to DigitalOcean and Linode, Vultr offers affordable and scalable compute resources. It is a favorite for hosting websites and lightweight applications.
Heroku: A PaaS service for deploying and managing applications, especially popular for developers who want to focus on coding without managing infrastructure. It is based on AWS but abstracts much of the complexity.
Infrastructure as a Service (IaaS): Provides virtualized computing resources like virtual machines, storage, and networking. (e.g., AWS EC2, Azure Virtual Machines, Google Compute Engine)
Platform as a Service (PaaS): Provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining infrastructure. (e.g., Heroku, AWS Elastic Beanstalk, Google App Engine)
Software as a Service (SaaS): Delivers software applications over the internet on a subscription basis. (e.g., Microsoft 365, Google Workspace, Salesforce)
Each cloud provider offers unique advantages tailored to different use cases. AWS is known for its broad ecosystem and flexibility, Azure for its integration with Microsoft products and enterprise tools, GCP for its strength in data and machine learning, while other providers like IBM Cloud, Oracle Cloud, and Alibaba Cloud focus on specific enterprise solutions.
Selecting a cloud provider depends on your organization’s requirements, including compute power, security, machine learning capabilities, or ease of integration with existing tools. Many businesses opt for a multi-cloud strategy, leveraging the strengths of different providers to optimize performance, cost, and capabilities.
Cost Efficiency:
Linux is an open-source operating system, meaning there are no licensing fees associated with its use. When running Linux in the cloud, this can lead to significant cost savings compared to proprietary systems, especially at scale. Cloud service providers often offer lower-cost Linux-based virtual machines (VMs).
Customization and Flexibility:
Linux offers a high degree of flexibility for customization, allowing users to modify the operating system to fit specific workloads or project needs. This flexibility is ideal for cloud environments, where businesses can tailor their systems to match specific application needs, from web servers to large-scale databases.
Security:
Linux has a strong reputation for being secure, with regular updates and a robust community that actively patches vulnerabilities. Cloud environments benefit from Linux's security features, as the OS is less prone to malware and can be more easily hardened through customized security configurations.
Scalability:
Linux is highly scalable and can handle large amounts of traffic and processing power, making it well-suited for cloud environments. Many cloud-native technologies, such as Kubernetes and Docker, rely on Linux-based containers and microservices architectures, which makes Linux the natural fit for scalable cloud applications.
Wide Range of Tools and Compatibility:
Linux supports a broad range of cloud-native tools, including automation tools, CI/CD pipelines, and monitoring systems. The OS is well-integrated into cloud services, and most cloud providers offer Linux distributions such as Ubuntu, CentOS, Debian, and Red Hat, making it compatible with nearly any type of cloud infrastructure.
Strong Community Support:
Linux has a vast, active user base and development community. This open-source ecosystem ensures access to a large pool of resources, forums, guides, and troubleshooting support for Linux in the cloud. Users can also benefit from contributions and innovations from the community.
Learning Curve:
Linux can be challenging for users who are new to the platform. Its command-line interface (CLI), file system structure, and user management are different from more familiar operating systems like Windows. This learning curve can be a barrier for organizations and users who need more time to train or onboard staff.
Compatibility Issues:
While Linux offers vast compatibility with cloud-native tools and services, it may not be compatible with some legacy software applications that were designed specifically for Windows or other operating systems. In some cases, organizations may need to find alternatives or workarounds, which can be time-consuming.
Limited Support for Proprietary Software:
Many proprietary software applications and enterprise solutions are optimized for Windows, and while Linux has its own open-source alternatives, compatibility issues can arise. Organizations relying on specific proprietary software may face challenges running it on Linux without using compatibility layers like Wine or virtualization, which can introduce complexity and performance overhead.
Management Overhead:
Linux often requires more manual configuration and system management than other operating systems. While cloud environments can help automate some of these tasks, administrators may still need to spend more time managing configurations, security patches, and system updates, especially in large-scale deployments.
Support and Documentation Differences:
While Linux has a large community, enterprise-level support may not be as comprehensive or immediately available compared to proprietary systems like Windows Server. For organizations that require high levels of dedicated support and service level agreements (SLAs), choosing the right Linux distribution with proper commercial backing (e.g., Red Hat) may be necessary, potentially adding costs.
Hardware Compatibility:
While Linux runs well on most cloud infrastructures, it can sometimes encounter hardware compatibility issues, particularly on specialized or older cloud hardware configurations. This is less of an issue in public clouds but may still be relevant in private cloud or hybrid cloud deployments using custom hardware.
Please bear in mind that, depending on the options you pick and how much time has passed since this was written, the console may look a little different from what I have below, but it is usually still pretty close and can guide you to what you need to do.
Step 1: Log into AWS Management Console Go to the AWS Management Console and log in.
In the search bar, type EC2 and select EC2 to open the EC2 Dashboard.
In the EC2 Dashboard, click Launch Instance.
On the "Launch an instance" page, provide the following details:
Name and Tags: Optionally, give your instance a name, such as MyLinuxVM.
Application and OS Images (Amazon Machine Image): Choose an Amazon Machine Image (AMI). Common Linux distributions are available, such as:
Instance Type: Select an instance type, such as t2.micro (part of the AWS Free Tier) or t3.micro for general-purpose use.
Key Pair (login): Select an existing key pair if you already have one, or create a new key pair by clicking Create a new key pair. Download and save this key file (.pem) securely, as you’ll need it to SSH into the instance.
Network settings: Ensure SSH (port 22) is enabled. If you plan to run a web server, you may also enable HTTP (port 80) and HTTPS (port 443) here.
In the Configure Storage section, select the storage options:
By default, an 8 GiB root volume is created with General Purpose SSD (gp2) storage. You can increase the size or change the storage type if needed.
If you need additional storage, you can add new volumes here as well.
Step 4: Configure Advanced Settings (Optional)In the Advanced Details section, you can set additional configuration options like instance roles, user data, and shutdown behavior. This is optional, so you can skip this for a basic setup.
Step 5: Review and LaunchAfter configuring the settings, review your instance configuration to ensure everything is set up correctly.
Click Launch Instance. AWS will now create the instance, which usually takes a few seconds.
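The SSH connection step isn't shown above; from your local machine it would look something like this (assuming your key pair file is called MyLinuxVM.pem and you chose an Amazon Linux AMI):
chmod 400 MyLinuxVM.pem                               # protect the private key
ssh -i MyLinuxVM.pem ec2-user@<instance-public-ip>    # use ubuntu@ instead of ec2-user@ for Ubuntu AMIs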
Once connected, you can start installing and configuring software. For example, to update the package manager and install Nginx, you would run:
sudo yum update -y # for Amazon Linux
sudo yum install nginx -y
If you’re using Ubuntu, you would use sudo apt update and sudo apt install instead.
Step 8: Stop or Terminate the VM (When Done)When finished, you can either stop the instance to retain data but not run up charges or terminate it to delete it completely.
Go to the EC2 Dashboard.
Here’s a recap of the steps to create a Linux VM on AWS:
AWS EC2 offers a flexible, pay-as-you-go model and many configuration options. Let me know if you need further help or have any specific requirements!
Go to the Azure Portal.
Sign in with your credentials.
Image: Select the Linux distribution you want to use. You can choose from distributions like:
Size: Select a VM size. For testing purposes, a lower-tier instance like Standard_B1s or Standard_B2s is sufficient. You can adjust this depending on your needs.
Authentication Type: Select SSH public key or password
SSH:
Inbound port rules: Select Allow selected ports and check SSH (22). This allows you to SSH into the VM.
Step 4: Configure Disks (Optional)Under the Disks tab, choose the type of disk you want for your VM:
For this setup, you can choose Standard SSD or Standard HDD.
You can attach additional disks if needed, but for a simple VM, the default OS disk is sufficient.
Step 5: NetworkingOn the Networking tab, configure network settings:
Leave the rest of the networking settings as default unless you need custom settings.
Step 6: Management, Monitoring, and Tags (Optional)Tags (Optional): Tags help organize resources by grouping them logically (e.g., env=dev, owner=teamA). You can skip this for now.
Step 7: Review and CreateOnce the deployment is complete:
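The connection command isn't shown above; from your local machine it would look something like this (assuming you chose SSH public key authentication with the default azureuser username):
ssh azureuser@<vm-public-ip>    # the public IP is shown on the VM's overview page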
Once connected to your Linux VM, you can start installing software and configuring it based on your requirements.
For example, on an Ubuntu or Debian image, to update the package manager and install Apache, you would run:
sudo apt update
sudo apt install apache2 -y
When you're finished with the VM, remember to either stop or delete it to avoid incurring costs:
You've now successfully created and connected to a Linux VM on Microsoft Azure! Here's a recap of the main steps:
This is a basic VM setup, but Azure also provides advanced features like auto-scaling, networking customization, and integrations with services like Azure Active Directory, storage, and more.
Boot Disk:
Operating System: Select a Linux distribution. Common choices include:
Identity and API access: This determines what level of access the VM has to Google Cloud APIs. You can leave this as the default for a basic setup.
Firewall Rules:
Step 4: SSH Key Setup
Authentication: By default, Google Cloud uses its built-in SSH key management, but you can manually provide an SSH key if you want. There should be an SSH button next to your instance; click this and it will open an SSH session for you.
Step 5: Review and Create the VM
Once the VM is created, you can access it via SSH in multiple ways.
Option 1: Use Google Cloud Console SSH
In the VM Instances dashboard, you’ll see your new VM listed.
Click the SSH button next to your VM’s name, and a browser-based SSH terminal will open. You’ll be logged into the VM automatically.
Once connected, you can start configuring your VM and installing software. For example, on a Debian or Ubuntu image, to update the package manager and install Nginx (a web server), you would run:
sudo apt update
sudo apt install nginx -y
You can install any software package based on your use case, such as Apache, Docker, Python, etc.
Step 8: Stop or Delete the VM (When Done)You’ve successfully created a Linux virtual machine on Google Cloud Platform. Here’s a recap of the steps:
Google Cloud provides a flexible and scalable platform for hosting your Linux-based workloads. If you need to scale up or down, GCP makes it easy to adjust the resources allocated to your VM.
What is Docker? Docker is a platform that enables developers to create, deploy, and run applications in containers. A container packages an application and all its dependencies into a standardized unit that can run reliably on any computing environment, from a developer’s local machine to large-scale cloud infrastructure.
What is Containerization? Containerization is a lightweight form of virtualization where applications are isolated in containers. Unlike traditional virtual machines (VMs), containers share the host system’s operating system, which makes them more efficient, faster to start, and less resource-intensive. Containers run the same across different environments because they encapsulate all the necessary libraries and dependencies for the application.
Benefits of Docker and Containerization: containers are lightweight, start in seconds, share the host's kernel rather than needing a full guest OS, and run the same way across development, test, and production environments.
1. Set Up Your Ubuntu EC2 Instance in AWSFrom your local machine, use SSH to connect to the instance:
Once connected, you’ll be in the terminal of your Ubuntu instance. You can also connect from inside AWS terminal.
3. Update and Install Docker on UbuntuIf Docker isn’t already installed on your EC2 instance, follow these steps:
Update package lists:
Install Docker dependencies:
Add Docker’s GPG key:
Add Docker’s repository:
Install Docker:
Verify Docker installation:
Test Docker by running:
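The individual commands aren't reproduced above; one way to do this, roughly following Docker's own apt repository instructions (these occasionally change, so check the official docs if something fails):
sudo apt update
sudo apt install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io
docker --version                 # verify the installation
sudo docker run hello-world      # test that containers can run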
With Docker installed, pull the CentOS image:
This command downloads the CentOS image from Docker Hub, which you can then use to create CentOS containers.
5. Run a CentOS ContainerCreate and start a CentOS container, giving it interactive access (-it), which allows you to run commands in the container:
This command:
Now, you’re inside the CentOS container, and you can start running commands or programs just as you would on a CentOS machine.
For example:
Check CentOS version:
Install additional packages (like curl or vim) using yum (CentOS’s package manager):
When you’re done, exit the container by typing:
This stops the container and returns you to the Ubuntu host shell.
8. (Optional) Start the Container AgainTo restart the container you previously created, first list your Docker containers to find the container ID:
Then start the container:
This allows you to pick up where you left off in the CentOS environment.
You now have a CentOS environment running in a Docker container on your Ubuntu AWS instance! You can use it for development, testing, or running CentOS-specific applications.
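Here is a condensed sketch of the commands for steps 4 to 8 (the packages installed inside the container are just examples):
sudo docker pull centos                      # Step 4: download the CentOS image
sudo docker run -it centos /bin/bash         # Step 5: start an interactive CentOS container
cat /etc/centos-release                      # Step 6: check the CentOS version (inside the container)
yum install -y curl vim                      # Step 6: install extra packages (inside the container)
exit                                         # Step 7: leave the container
sudo docker ps -a                            # Step 8: list containers and note the container ID
sudo docker start -ai <container-id>         # Step 8: restart and reattach to it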
1. Pull the Official Kali Linux Docker ImageKali Linux offers an official Docker image on Docker Hub. To get it, use the following command:
This command pulls the latest Kali Linux Docker image with the rolling release version.
2. Run the Kali Linux ContainerOnce the image is downloaded, create and start a container from it:
-it: Runs the container interactively and allocates a pseudo-TTY, allowing you to interact with it directly.
kalilinux/kali-rolling: Specifies the Kali Linux Docker image.
This will open a shell in your Kali Linux container where you can start using it.
3. Update Kali Linux (Optional)Inside the Kali container, it’s good practice to update the package list and upgrade packages for the latest tools and fixes:
Kali Linux in Docker starts as a minimal installation. To install specific tools, use apt install inside the container. For example:
You can install any Kali tools available in the repositories as needed.
5. Exit the ContainerWhen you’re done, type exit to close the session. This stops the container.
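The commands for steps 1 to 5 would look roughly like this (nmap is just an example tool):
sudo docker pull kalilinux/kali-rolling               # Step 1: pull the image
sudo docker run -it kalilinux/kali-rolling /bin/bash  # Step 2: start an interactive container
apt update && apt upgrade -y                          # Step 3: update inside the container
apt install -y nmap                                   # Step 4: install a tool from the Kali repositories
exit                                                  # Step 5: leave (and stop) the container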
Additional Tips
Save Container State: If you want to save your installed tools and configurations, you can create an image from your container.
Start Detached: Run docker run -d kalilinux/kali-rolling to run the container in detached mode (in the background).
Running Kali Linux in Docker is a great way to use Kali tools without a full installation, ideal for quick, contained security testing setups.
1. Start the Container with a NameFor example, if you want to start a Kali Linux container and name it kali-container, run:
Now, the container is running with the name kali-container.
2. Reconnect to the Container by NameIf the container is still running, you can attach to it using:
This command will reattach you to the container’s main process (if it's still running interactively).
3. Start and Connect to a Stopped Container by NameIf the container has stopped and you want to restart it and attach to it, use:
The -i option allows you to interact with the container as it runs.
4. Use docker exec to Run Commands in the Container by NameIf you want to open a new terminal session in the container, or the container is running in the background, you can use docker exec:
This command starts a new Bash session inside the container without stopping or restarting it.
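Putting the four steps together, the commands look like this (kali-container is the example name used above):
sudo docker run -it --name kali-container kalilinux/kali-rolling /bin/bash   # 1. start the container with a name
sudo docker attach kali-container                                            # 2. reattach to its main process if it is still running
sudo docker start -ai kali-container                                         # 3. restart a stopped container and attach to it
sudo docker exec -it kali-container /bin/bash                                # 4. open a new shell in a running container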
Overview:
VirtualBox is an open-source virtualization software developed by Oracle. It allows users to run multiple operating systems as virtual machines on a single physical host machine. It’s widely used for testing, development, and training, supporting a range of guest OSes including Windows, Linux, and macOS.
Features of VirtualBox:
Basic VirtualBox Workflow:
Install VirtualBox: Download from VirtualBox’s official website.
Create a New Virtual Machine:
Run and Manage the VM:
Use Snapshots: Before making changes, take a snapshot to easily revert if needed.
Advantages of VirtualBox:
Overview:
KVM (Kernel-based Virtual Machine) is a virtualization technology built into the Linux kernel, transforming it into a hypervisor. It allows Linux to host multiple virtual machines with excellent performance and stability, and is ideal for Linux servers.
Features of KVM:
Basic KVM Workflow:
Install KVM and Related Tools:
Set Up Virtual Machines:
Network and Storage Configuration:
Start, Stop, and Manage VMs:
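The commands behind this workflow aren't listed above; on Ubuntu/Debian a minimal setup might look like this (package names differ slightly on other distributions, and myvm is a made-up VM name):
sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager
sudo usermod -aG libvirt $USER        # allow your user to manage VMs (log out and back in afterwards)
virsh list --all                      # list defined VMs
virsh start myvm                      # start a VM
virsh shutdown myvm                   # shut it down cleanly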
What is KVM?
KVM (Kernel-based Virtual Machine) is a tool that lets you turn a Linux computer into a hypervisor, which is basically a host machine that can run multiple virtual computers (called virtual machines or VMs) on it. With KVM, each VM acts like a separate computer, complete with its own operating system, apps, and settings, all running on one physical computer.
How KVM WorksKVM works by converting your Linux computer’s kernel (the core part of the OS that communicates with hardware) into a tool that can manage multiple virtual environments. This means each VM you create can operate independently, as if it were a separate computer. Since KVM is built directly into the Linux kernel, it’s very fast and efficient.
What Makes KVM Different from Other Virtualization Tools?KVM is unique because it is part of the Linux kernel. Other virtualization tools, like VirtualBox, work on top of the operating system as an application, while KVM is directly integrated into Linux. This integration makes KVM a powerful and efficient choice if you're already running a Linux server or workstation and want to run VMs on it.
Example of Using KVMImagine you’re a developer working on multiple projects. Some projects require different environments, like a specific version of Linux, Windows, or even a different Linux distribution. With KVM:
Each VM has its own operating system and can run as if it were an independent machine, even though they’re all running on the same physical computer.
Advantages of Using KVMKVM is ideal if:
A hypervisor is a piece of software, firmware, or hardware that enables you to create and manage virtual machines (VMs) on a single physical computer. It allows multiple VMs to share the same physical hardware resources, like CPU, memory, and storage, while keeping them separated from each other. Each VM behaves as if it’s a separate, standalone computer, with its own operating system and applications.
How a Hypervisor WorksThe hypervisor sits between the physical hardware and the virtual machines. It acts as a traffic controller, allocating resources (like CPU time, memory, and disk space) to each VM as needed. Because the VMs are isolated, they can run different operating systems and applications without interfering with one another.
Types of HypervisorsThere are two main types of hypervisors:
Type 1 (Bare-Metal) Hypervisors: These hypervisors run directly on the computer’s hardware, without an underlying operating system. This setup is often used in data centers and enterprise environments because it offers high performance and stability.
Examples: VMware ESXi, Microsoft Hyper-V, Xen
Type 2 (Hosted) Hypervisors: These hypervisors run on top of an existing operating system, like an application. This setup is more common for desktop users who want to run multiple OS environments for development, testing, or general use.
Examples: VirtualBox, VMware Workstation, Parallels
What the Hypervisor DoesHypervisors are incredibly useful for tasks like:
A hypervisor allows a single computer to host multiple, isolated virtual environments, making it essential for virtualization, cloud computing, and efficient use of computing resources.
Comparison of Docker and Virtualization (VirtualBox/KVM)

Feature         | Docker                                    | VirtualBox/KVM
Isolation       | Process-level isolation                   | Full OS-level isolation
Resource Usage  | Lightweight, shares OS kernel             | More resource-intensive, full OS
Use Case        | Microservices, cloud-native applications  | Full OS testing, isolated environments
Boot Time       | Very fast (seconds)                       | Slower (minutes)
Compatibility   | Works best with Linux apps                | Full OS compatibility
Security        | Good but shared kernel                    | Stronger isolation per VM
Both Docker and virtualization solutions like VirtualBox or KVM are essential tools in modern IT. Docker is ideal for microservices, CI/CD pipelines, and cloud-native applications where quick scaling and portability are priorities. Virtualization tools are better suited for testing, OS isolation, and environments where a fully independent OS is needed.
Layman's Terms for Docker and VMs
Imagine you have a powerful computer, and you want to run multiple applications or operating systems on it, maybe for different projects. There are two main ways to do this: containerization (using something like Docker) and virtualization (using tools like VirtualBox or KVM). Think of these as ways to organize and separate your applications or systems to keep them from interfering with each other.
Containers (Docker)Containers are like small, self-contained “rooms” in your computer where you can put an application and all the things it needs to work. Each container has everything it needs, like its specific files, libraries, and tools, so it doesn’t have to rely on anything outside of it. But, all these rooms share the same walls and structure — meaning, they share your computer’s core, or kernel, which is the “brain” of your operating system. This makes containers very lightweight; they don’t need their own separate operating system because they use your computer’s core resources.
Example: Let’s say you’re building a website that needs specific software to run. You can set up a container with just that software and your website files. If you want to create a second container for another project, you can do that too, without worrying about messing up the first one. Both containers will run on the same system but act like separate mini-environments.
Advantages: Containers start very quickly and take up less space on your computer. If you want to move your application to another computer, you just move the container without changing anything. It’ll run the same way on any machine.
Virtual Machines (Virtualization with VirtualBox or KVM)Virtual machines (VMs), on the other hand, are like having multiple separate “computers” inside your one physical computer. Each virtual machine has its own full operating system (like a mini version of Windows or Linux) and is completely isolated from others. This is great if you need to run different operating systems, test software, or keep things very separate from each other.
Virtualization tools like VirtualBox or KVM create these virtual machines. They simulate everything a computer needs — from the processor to memory to storage — for each VM, making each one act like a separate computer. This is more resource-intensive because each VM is running its own operating system on top of the host system.
Example: Say you’re developing software that needs to work on both Linux and Windows. You can set up a Linux VM and a Windows VM, both on the same physical computer. Each VM will operate independently, so you can test the software in both environments without needing separate physical machines.
Advantages: VMs provide stronger separation since each one runs a full OS. This is useful when you need total separation between systems, especially when running programs or applications that need different operating systems or specific configurations.
Comparison of Containers and VMsSpeed: Containers (like Docker) start up in seconds because they don’t need to load a full OS. VMs take longer since they’re booting up a whole operating system.
Resource Use: Containers share resources with the host OS, so they use less memory and storage. VMs are like mini-computers with their own OS, so they use more resources.
Isolation: VMs are more isolated from each other because they don’t share the OS. This can be more secure but slower. Containers are more connected to the host OS, making them fast but with slightly less isolation.
When to Use Each OneContainers: If you’re building or running specific applications (like web apps or microservices) that don’t need their own full OS, containers are faster and easier. They’re great for modern cloud applications that need to be quickly scalable and portable.
Virtual Machines: If you need to run different operating systems or keep things highly separate (like for testing software on Windows and Linux), VMs are a better choice. They work well for tasks where complete isolation and OS-specific environments are necessary.
In short, containers are lightweight, flexible mini-environments, ideal for applications that need to run anywhere quickly. Virtual machines, meanwhile, are like full computers within a computer, providing strong isolation but with more overhead.
Set up a VM with VirtualboxRequirements:
Steps to Set Up a Virtual Machine in VirtualBox
1. Open VirtualBoxChoose the Type and Version:
Click Next to continue.
3. Allocate Memory (RAM)Choose "Create a virtual hard disk now" and click Create.
Select the type of hard disk:
Storage on physical hard disk:
Follow the OS installer’s instructions.
For example, if you’re installing Ubuntu:
Your VM is Now Ready!
Your virtual machine should be fully set up, running its own isolated operating system within VirtualBox. You can explore the OS, install software, and use it just like a separate computer.
The .bashrc file is a hidden configuration file located in the user’s home directory (~/.bashrc). It’s specific to the Bash shell and is loaded every time you start a new Bash session. This file lets you customize the shell by defining variables, aliases, functions, prompt appearance, and more. Common Customizations in .bashrc:
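The original list of examples isn't reproduced here, so here are a few typical entries; the alias names and values are only illustrative:
alias ll='ls -la'                    # alias: shorter command for a detailed listing
export EDITOR=nano                   # environment variable: set the default editor
export PATH="$HOME/bin:$PATH"        # add a personal script directory to PATH
PS1='\u@\h:\w\$ '                    # customize the prompt
greet() { echo "Hello, $1!"; }       # small reusable function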
To apply changes, either restart the terminal or source the file:
source ~/.bashrc
Similar to .bashrc, .zshrc is a configuration file for the Zsh shell, located in the user’s home directory (~/.zshrc). It’s loaded every time a new Zsh session starts. Zsh is highly customizable and includes advanced features like better tab completion and path expansion. Common Customizations in .zshrc:
Apply changes with:
source ~/.zshrc
Oh-My-Zsh is a popular open-source framework for managing Zsh configurations. It provides themes, plugins, and other tools to enhance the Zsh experience, making it easy to configure and customize.
Key Features of Oh-My-Zsh:
Installing Oh-My-Zsh:
To install Oh-My-Zsh, you need curl or wget:
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
or
sh -c "$(wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh -O -)"
If Zsh itself is not installed yet, install it first:
sudo apt install zsh
You may also need to run chsh -s $(which zsh) to make Zsh your default shell, then log out and back in.
After installation, you can enable themes and plugins by editing .zshrc.
Aliases are shortcuts for commonly used commands. Instead of typing a long command, you create a short alias that you can use in its place. Example Aliases:
In .bashrc or .zshrc:
Temporary Aliases (only for the current session):
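The example aliases themselves aren't shown above; a few illustrative ones:
# In .bashrc or .zshrc (persistent across sessions):
alias ll='ls -alF'
alias update='sudo apt update && sudo apt upgrade -y'
# Temporary alias, typed directly in the terminal (lost when the session ends):
alias cls='clear'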
Functions allow you to create more complex or multi-step commands and reuse them. Unlike aliases, functions can take arguments, making them more flexible. Example Functions:
Greeting Function:
greet() {
echo "Hello, $1!"
}
Usage: greet "Alice" --> Outputs: "Hello, Alice!"
Backup Function:
backup() {
cp -r "$1" "$1.bak"
echo "Backup of $1 created as $1.bak"
}
Usage: backup myfile.txt
To make functions persistent, add them to .bashrc or .zshrc and reload the file.
Tips for Effective Shell Customization
Customizing your shell environment is a powerful way to boost productivity and streamline your workflow. Aliases, functions, and advanced customization options help you tailor the terminal to your preferences and daily tasks.
Open the .bashrc file in your home directory:
Change the prompt to display the username, hostname, and the current directory by modifying the PS1 variable in .bashrc. Add this line at the end:
PS1='\u@\h:\w\$ '
Save the file and reload .bashrc:
source ~/.bashrc
Verify that your prompt displays in the new format (e.g., user@hostname:/current/directory$).
In .bashrc, create an alias to simplify listing files with details. Add:
alias ll='ls -la'
Create another alias to confirm before deleting files:
alias rm='rm -i'
Save the file, reload .bashrc, and test the new aliases:
ll # Check if this lists files in long format
rm file.txt # See if it prompts for confirmation before deleting
In .bashrc or .zshrc, write a function called greet that takes a name as input and outputs a greeting message.
greet() {
echo "Hello, $1! Welcome to the system."
}
Reload the configuration file:
Test the function:
Add a function in .bashrc or .zshrc called backup to make a backup copy of a file with a .bak extension:
backup() {
cp "$1" "$1.bak"
echo "Backup of $1 created as $1.bak"
}
Reload the configuration file:
Test the function:
Open .bashrc and set the prompt to display with colors. Add this to your PS1 line:
PS1='\[\e[32m\]\u@\h\[\e[0m\]:\[\e[34m\]\w\[\e[0m\]\$ '
This will display the username and hostname in green and the directory in blue.
Save the file, reload .bashrc, and check the new colorful prompt:
Install Oh-My-Zsh following the installation instructions on the Oh-My-Zsh website.
Open the .zshrc file and add plugins to the plugins line, for example:
plugins=(git zsh-autosuggestions zsh-syntax-highlighting)
(The zsh-autosuggestions and zsh-syntax-highlighting plugins need to be cloned into ~/.oh-my-zsh/custom/plugins before they can be enabled.)
Apply the new plugins by reloading .zshrc:
Explore the new features provided by the plugins (e.g., syntax highlighting, autosuggestions).
Edit the crontab file:
Add a cron job to display a greeting message at a specific time, for example, every day at 9 AM:
0 9 * * * echo "Good morning!" | logger
Save and exit. The message will appear in the system logs at the specified time.
Add a custom directory to your $PATH by editing .bashrc, for example:
export PATH="$HOME/my_scripts:$PATH"
Create a directory called my_scripts and add a simple script inside:
mkdir -p ~/my_scripts
echo 'echo "Hello from my_scripts!"' > ~/my_scripts/hello.sh
chmod +x ~/my_scripts/hello.sh
Reload .bashrc and test the script:
diskusage() {
df -h --total | grep 'total'
}
Reload .bashrc and test the function:
Add an alias in .bashrc to show the last 5 commands you ran, for example:
alias last5='history 5'
Reload .bashrc and try the alias:
last5
Modify the alias to save the command history to a log file, for example:
alias last5='history 5 | tee -a ~/history_log.txt'
DevOps is a set of practices that combines software development (Dev) and IT operations (Ops) to enhance collaboration, automate processes, and improve the delivery pipeline's efficiency. Key to DevOps are tools and methodologies like Continuous Integration (CI), Continuous Delivery/Continuous Deployment (CD), and configuration management, which work together to streamline software development, testing, and deployment. Here’s a high-level overview of some foundational DevOps concepts and tools.
Continuous Integration (CI): This practice involves frequently merging code changes into a shared repository. Automated tests run each time code is committed, which helps catch issues early, ensuring the codebase remains stable.
Continuous Delivery (CD): Expands on CI by automating the release of code changes to testing and production environments. With CD, the software is always ready for deployment, but a human may initiate it.
Continuous Deployment: Similar to Continuous Delivery but goes one step further by automating the deployment process as well. Code changes are automatically pushed to production without human intervention.
CI/CD practices reduce the risk of code conflicts, speed up releases, and ensure high-quality code reaches end-users faster.
Various tools help facilitate DevOps practices, each serving a specific role in the software development and deployment lifecycle. Here’s an introduction to some widely used tools in DevOps:
Ansible
Ansible is an open-source tool primarily used for configuration management, application deployment, and automation. It allows you to define system configurations in simple text files (called playbooks) written in YAML, making it easy to read and write.
Key features: agentless operation over SSH, human-readable YAML playbooks, and repeatable configuration across many systems.
Use case: Ansible can be used to set up a web server cluster, configure firewall rules, or automate repetitive tasks like user creation and package installations across multiple systems.
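As a rough illustration, once Ansible is installed you point it at an inventory of hosts and run playbooks against them (inventory.ini and webservers.yml below are hypothetical file names):
ansible all -i inventory.ini -m ping               # ad-hoc check that every host in the inventory is reachable
ansible-playbook -i inventory.ini webservers.yml   # apply the tasks defined in the playbook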
Jenkins
Jenkins is a popular, open-source automation server used to implement CI/CD pipelines. Jenkins allows developers to build, test, and deploy code automatically based on triggers like code commits or pull requests.
Key features: builds and tests triggered automatically by code commits or pull requests, pipelines defined as code, and a large plugin ecosystem for integrating with other tools.
Use case: Jenkins can automatically test code pushed to a repository, create builds for different environments (like development or staging), and deploy applications to servers or containers.
Kubernetes
Kubernetes (K8s) is an open-source container orchestration platform used to manage, scale, and deploy containerized applications. Originally developed by Google, Kubernetes helps automate the deployment and management of applications across a cluster of servers.
Key features: automated deployment and scaling, service discovery and load balancing, and self-healing (restarting failed containers) across a cluster.
Use case: Kubernetes can manage a microservices architecture, where each service runs in its own container. Kubernetes would handle service discovery, scaling, and load balancing, ensuring seamless communication and performance across the entire application.
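For a feel of day-to-day use, the kubectl command line is how manifests are applied and workloads scaled (deployment.yaml and myapp below are placeholder names):
kubectl apply -f deployment.yaml                # create or update the resources described in the manifest
kubectl get pods                                # list the running pods
kubectl scale deployment myapp --replicas=3     # scale the deployment to three replicas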
A typical DevOps workflow using these tools might look like this:
Code Changes: Developers push code changes to a Git repository.
Continuous Integration with Jenkins:
Jenkins detects the changes, runs automated tests, and, if successful, builds the code into a deployable package. If any tests fail, Jenkins notifies the developer to fix the issues.
Configuration Management with Ansible:
Ansible sets up the necessary server environment or configures the cloud infrastructure, preparing it for the application.
Deployment to Kubernetes:
Jenkins deploys the application package to a Kubernetes cluster.
Kubernetes orchestrates the deployment, scaling containers as needed and balancing the load across multiple instances.
Monitoring and Logging:
Other DevOps tools (like Prometheus or ELK Stack) track performance and log errors, providing insights for continuous improvement.
DevOps tools like Ansible, Jenkins, and Kubernetes bring together the best practices of development and operations, enabling organizations to achieve faster, more reliable, and scalable software delivery. Understanding these tools and how they fit into CI/CD pipelines is essential for modern software engineering and infrastructure management.
In week five, we took our Linux skills into modern infrastructure management by exploring Linux in the cloud, containers and virtualization, shell customization, and DevOps tools. Here’s a breakdown of each section and the main concepts covered.
With Linux being the backbone of many cloud environments, understanding how it operates in the cloud provides flexibility and scalability for managing applications and infrastructure.
Advantages: flexibility, on-demand scalability, and easy deployment of applications and infrastructure.
Disadvantages: trade-offs around security and the added responsibility of managing your data.
Linux in the cloud offers a powerful way to deploy and manage applications at scale but comes with trade-offs regarding security and data management.
Containers and virtualization allow multiple environments to operate on a single system, each with its isolated resources, which is critical for developing, testing, and deploying applications.
Docker and Containerization: Docker packages applications and their dependencies into lightweight, portable containers that share the host's kernel.
Virtualization and Hypervisors: hypervisors create and manage full virtual machines, each running its own operating system.
Containers are lightweight and ideal for applications, while VMs provide more isolation and are useful for running different OS environments.
Shell customization improves the user experience and productivity by making frequently used commands more accessible and creating personalized workflows.
.bashrc and .zshrc: configuration files that execute commands and set environment variables whenever a new shell session starts.
Creating Aliases and Functions: shortcuts and small reusable commands (such as ll, greet, and backup) that cut down on repetitive typing.
Using zsh and oh-my-zsh: a framework that adds themes and plugins such as autosuggestions and syntax highlighting.
Customizing the shell with aliases, functions, and configurations boosts productivity by reducing repetitive typing and organizing workflows.
DevOps tools bridge the gap between development and operations, supporting automation, continuous integration, and deployment.
Overview of CI/CD (Continuous Integration/Continuous Deployment): frequent merging of code, automated testing, and automated releases keep software in a deployable state.
DevOps Tools: Ansible for configuration management, Jenkins for CI/CD pipelines, and Kubernetes for container orchestration.
DevOps tools enable automation and improve deployment efficiency, making it easier to maintain reliable and scalable systems in complex environments.
Quiz
1. What is a key advantage of using Linux in the cloud?
a) Improved graphical user interface
b) Enhanced gaming performance
c) Ability to scale resources easily and on-demand
d) Exclusive software only available in the cloud
2. What is Docker primarily used for?
a) Running virtual machines
b) Customizing the shell environment
c) Managing cloud resources
d) Creating lightweight, portable containers for applications
3. What is a hypervisor?
a) Software or hardware that creates and manages virtual machines
b) A tool for customizing the shell
c) A file system specifically for virtual disks
d) A Linux distribution optimized for cloud environments
4. Which tool is commonly used to add themes and plugins to the zsh shell?
a) oh-my-zsh
b) apt
c) pip
d) vim-plug
5. What is the primary purpose of Kubernetes?
a) Automating code compilation
b) Monitoring system performance
c) Creating lightweight virtual machines
d) Managing and orchestrating containers across multiple hosts
6. Which command lists the currently running Docker containers?
a) docker ps
b) docker ls
c) docker list
d) docker show
7. What is the purpose of the .bashrc file?
a) Configures firewall rules for the user
b) Executes commands or sets environment variables when a new shell session starts
c) Stores backup scripts
d) Automatically installs software updates
8. What is the main difference between Docker and VirtualBox?
a) VirtualBox uses containers, while Docker uses virtual machines.
b) VirtualBox is free, while Docker is not.
c) Docker uses containers, while VirtualBox creates full virtual machines.
d) Docker is for Windows only, while VirtualBox is for Linux.
9. What does auto-scaling mean in a cloud environment?
a) Automatically updating the operating system
b) Upgrading the software without manual intervention
c) Automatically creating backups
d) Adjusting the number of resources based on demand
10. Which tool is primarily used for configuration management and automating server setup?
a) Jenkins
b) Ansible
c) Kubernetes
d) Docker
Answers
1. c) Ability to scale resources easily and on-demand
2. d) Creating lightweight, portable containers for applications
3. a) Software or hardware that creates and manages virtual machines
4. a) oh-my-zsh
5. d) Managing and orchestrating containers across multiple hosts
6. a) docker ps
7. b) Executes commands or sets environment variables when a new shell session starts
8. c) Docker uses containers, while VirtualBox creates full virtual machines.
9. d) Adjusting the number of resources based on demand
10. b) Ansible