Monitoring Processes in Linux with the ps Command: Checking Apache Web Server Process Information

In the vast toolbox of Linux system monitoring utilities, the `ps` command stands out for its direct approach to tracking what’s happening on your server or desktop. Whether you’re a system administrator, a developer, or simply a curious user, knowing how to leverage `ps` can provide you with insights into the processes running under the hood of your Linux machine.

Why Use the `ps` Utility?

The `ps` command is versatile and powerful, offering a snapshot of currently running processes. It’s particularly handy when you need to verify whether a specific service, like the Apache web server, is running and how it’s behaving in terms of resource consumption.

Example Use Case: Checking Apache Processes

The Apache HTTP Server (`httpd`) is widely used web server software and is essential for serving web pages. If you’re managing a website or a web application, you will often need to check whether Apache is running smoothly. Here’s how you can do that with `ps`:

ps auxwww | head -n 1; ps auxwww | grep httpd | grep -v grep

This command sequence is broken down as follows:

  • `ps auxwww`: Lists all running processes with a detailed output.
  • `head -n 1`: Prints only the first line of the `ps` output, i.e. the column headers.
  • `grep httpd`: Filters the list to show only Apache (`httpd`) processes.
  • `grep -v grep`: Excludes the `grep` command itself from the results.
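
If `pgrep` is available on your system, a shorter check accomplishes much the same thing (a minimal sketch; note that on Debian/Ubuntu the Apache processes are usually named `apache2` rather than `httpd`):

pgrep -l httpd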

Output Explained:

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 21215 0.0 0.1 524056 30560 ? Ss 16:59 0:00 /usr/sbin/httpd -DFOREGROUND
apache 21216 0.0 0.0 308392 14032 ? S 16:59 0:00 /usr/sbin/httpd -DFOREGROUND

  • USER: The username of the process owner.
  • PID: Process ID.
  • %CPU and %MEM: CPU and memory usage.
  • VSZ and RSS: Virtual and physical memory sizes.
  • TTY: The controlling terminal (`?` means the process has no controlling terminal).
  • STAT: Process status.
  • START: Start time of the process.
  • TIME: Cumulative CPU time.
  • COMMAND: Command line that started the process.
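
If you only need a handful of these columns, `ps` can select processes by command name and print a custom column list (a small sketch using standard `-o` format specifiers; the output order follows the list you pass to `-o`):

ps -C httpd -o user,pid,%cpu,%mem,stat,start,cmd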

Going Further:

To explore more options and details about the `ps` command, consulting the manual page is always a good idea. Simply type:

man ps

This command brings up the manual page for `ps`, providing comprehensive information on its usage, options, and examples to try out.

Conclusion:

Understanding and utilizing the `ps` command can significantly enhance your ability to monitor and manage processes on your Linux system. It’s a fundamental skill for troubleshooting and ensuring that essential services like Apache are running as expected.

How to Find and Kill Processes in Linux: A Practical Guide

Managing processes efficiently is a fundamental skill for any Linux user. There are instances, such as when an application becomes unresponsive or is consuming too much memory, where terminating processes becomes necessary. This post builds on the basics covered in a previous article, “Monitoring Processes in Linux with the ps Command: Checking Apache Web Server Process Information”, and dives into how to find and terminate specific processes, using `httpd` processes as our example.

Finding `httpd` Processes

To list all active `httpd` processes, use the command:

ps aux | grep -v grep | grep "httpd"

This command filters the list of all running processes to only show those related to `httpd`. The `grep -v grep` part excludes the grep command itself from the results.

Understanding the Output

The output columns USER, PID, %CPU, and others provide detailed information about each process, including its ID (PID) which is crucial for process management.

Killing Processes Manually

To terminate a process, you can use the `kill` command followed by the process ID (PID):

kill -s 9 29708 29707 ...

Here, `-s 9` specifies the SIGKILL signal, forcing the processes to stop immediately. It’s important to use SIGKILL cautiously, as it does not allow the application to perform any cleanup operations.
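
A gentler approach, sketched here with the example PIDs from above, is to send SIGTERM first and only escalate to SIGKILL afterwards:

kill -s TERM 29708 29707        # ask the processes to terminate cleanly
sleep 5                         # give them a few seconds to shut down
kill -s KILL 29708 29707        # force-kill any that are still running ("No such process" errors for already-exited PIDs are harmless)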

Automating with a Script

For convenience, you can automate this task with a simple shell script:

#!/bin/bash

# Find PIDs of httpd processes
OLD_HTTPD_PIDS=$(ps aux | grep "httpd" | grep -v "grep" | awk '{print $2}')

# Loop through and kill each process
for FPID in ${OLD_HTTPD_PIDS}; do
  echo "Killing httpd process pid: ${FPID}"
  kill -s 9 "${FPID}"
done

After saving the script as `/root/bin/kill_httpd.sh`, make it executable:

chmod -v 755 /root/bin/kill_httpd.sh

And run it:

/root/bin/kill_httpd.sh
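
If `pkill` is installed, the same cleanup can be expressed as a one-liner instead of the script above (a sketch; the `-x` flag matches the process name exactly, so adjust it if your Apache binary is named `apache2`):

pkill -9 -x httpd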

Final Thoughts

Proper process management ensures the smooth operation of Linux systems. While SIGKILL is effective for unresponsive processes, understanding different signals and their effects allows for more nuanced control. Always proceed with caution, especially when terminating processes, to avoid unintended system behavior or data loss.

How to fix pass store is uninitialized on Ubuntu Linux in Docker setup

The error message you’re seeing, “pass store is uninitialized“, indicates that the `pass` utility, which Docker uses for secure password storage, hasn’t been set up yet. To initialize `pass` and resolve this error, follow these steps:

  1. Install Pass: If you haven’t already, ensure that the `pass` password manager is installed on your system. You can do this on a Debian-based system (like Ubuntu) using:
       sudo apt-get update
       sudo apt-get install pass
       
  2. Initialize the Password Store: The password store needs to be initialized with a GPG key. If you don’t have a GPG key yet, you’ll need to create one:
    • Generate a GPG Key (if needed):
           gpg --full-generate-key
           

      Follow the prompts to create your key. You’ll be asked to provide a name, email, and a passphrase. Remember or securely store the passphrase, as it’s needed to unlock the key.

    • List GPG Keys:
      After creating a GPG key, list your available GPG keys to find the ID of the key you want to use with `pass`:

           gpg --list-secret-keys --keyid-format LONG
           

      Look for a line that looks like `sec rsa4096/KEY_ID_HERE 202X-XX-XX [SC]`. The `KEY_ID_HERE` part is your key ID.

    • Initialize Pass:
      With your GPG key ID, initialize the `pass` store:

           pass init "KEY_ID_HERE"
           
  3. Verify Initialization: To verify that `pass` has been initialized and is working, try adding a test entry:
       pass insert docker-credential-helpers/test
       

    When prompted, enter a test password. You can then list the contents of your password store:

       pass
       
  4. Configure Docker to Use `pass`: Ensure Docker is configured to use `pass` by checking your `~/.docker/config.json` file. It should have a line that specifies `pass` as the credsStore or credential helper:

    {
      "credsStore": "pass"
    }
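
Note that the `"credsStore": "pass"` setting relies on the `docker-credential-pass` helper (from the docker-credential-helpers project) being available on your `PATH`; a quick sanity check, as a sketch:

which docker-credential-pass || echo "docker-credential-pass helper not found"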

By following these steps, you should be able to initialize `pass` for Docker credential storage and resolve the “pass store is uninitialized” error. If you encounter any issues along the way, the error messages provided by each command can often give clues on how to proceed.

Secure Your Web Development Workflow: Generating and Using PGP Keys in phpStorm IDE on Linux

In today’s digital age, security is paramount, especially when it comes to web development. As developers, we handle sensitive information regularly, from user credentials to proprietary code. One way to enhance the security of your development workflow is by using Pretty Good Privacy (PGP) keys. In this guide, we’ll walk through the process of generating and utilizing PGP keys within the popular phpStorm IDE on a Linux environment.

Why Use PGP Keys?

PGP keys provide a robust method for encrypting and decrypting sensitive data, ensuring confidentiality and integrity. By utilizing PGP keys, you can securely communicate with other developers, sign commits to verify authenticity, and encrypt sensitive files.

Step 1: Install GnuPG

Before generating PGP keys, ensure that GnuPG (GNU Privacy Guard) is installed on your Linux system. Most distributions include GnuPG in their package repositories. You can install it using your package manager:

sudo apt-get update
sudo apt-get install gnupg

Step 2: Generate PGP Keys

Open a terminal window and enter the following command to generate a new PGP key pair:

gpg --full-generate-key

Follow the prompts to select the key type and size, specify the expiration date, and enter your name and email address. Be sure to use the email address associated with your phpStorm IDE account.

Step 3: Configure phpStorm

  1. Open phpStorm and navigate to File > Settings (or PhpStorm > Preferences on macOS).
  2. In the Settings window, expand the “Version Control” section and select “GPG/PGP.”
  3. Click on the “Add” button and browse to the location of your PGP executable (usually `/usr/bin/gpg`).
  4. Click “OK” to save the configuration.

Step 4: Import Your PGP Key

Back in the terminal, export your PGP public key:

gpg --export -a "Your Name" > public_key.asc

Import the exported public key into phpStorm:

  1. In phpStorm, go to File > Settings > Version Control > GPG/PGP.
  2. Click on the “Import” button and select the `public_key.asc` file.
  3. phpStorm will import the key and associate it with your IDE.

Step 5: Start Using PGP Keys

Now that your PGP key is set up in phpStorm, you can start utilizing its features:

  • Signing Commits: When committing changes to your version control system (e.g., Git), phpStorm will prompt you to sign your commits using your PGP key.
  • Encrypting Files: You can encrypt sensitive files before sharing them with collaborators, ensuring that only authorized individuals can access their contents.
  • Verifying Signatures: phpStorm will automatically verify the signatures of commits and files, providing an extra layer of trust in your development process.
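
Behind the scenes, commit signing is handled by Git itself, so it can also be enabled from the command line (a minimal sketch; KEY_ID_HERE is a placeholder for the key ID generated in Step 2):

git config --global user.signingkey KEY_ID_HERE
git config --global commit.gpgsign true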

By integrating PGP keys into your phpStorm workflow, you bolster the security of your web development projects, safeguarding sensitive data and ensuring the integrity of your codebase. Take the necessary steps today to fortify your development environment and embrace the power of encryption. Happy coding!

Counting the Number of Files in a Folder with Efficiency

Managing folders with a massive number of files can be a daunting task, especially when you need to quickly assess how many files are contained within. Thankfully, there are efficient ways to tackle this challenge using command-line tools.

Using `ls` and `wc`

One approach is to leverage the combination of `ls` and `wc` commands. By navigating to the target directory and executing a couple of commands, you can obtain the file count promptly.

cd /path/to/folder_with_huge_number_of_files1
ls -f | wc -l

Here’s a breakdown of what each command does:

  • `ls -f`: Lists all directory entries without sorting (the `-f` flag also implies `-a`, so hidden files and the `.` and `..` entries are included).
  • `wc -l`: Counts the number of lines output by `ls`.

This method efficiently counts the entries in the specified directory; subtract two from the result if you want to exclude the `.` and `..` entries.

Using Perl Scripting

Alternatively, Perl provides another powerful option for counting files within a directory. With a concise script, you can achieve the same result with ease.

cd /path/to/folder_with_huge_number_of_files2
perl -e 'opendir D, "."; @files = readdir D; closedir D; print scalar(@files)."\n";'

In this Perl script:

  • `opendir D, ".";`: Opens the current directory.
  • `@files = readdir D;`: Reads the directory entries into an array.
  • `closedir D;`: Closes the directory handle.
  • `print scalar(@files)."\n";`: Prints the number of entries (like `ls -f`, this count includes `.` and `..`).

Both methods provide efficient solutions for determining the number of files in a directory, catering to different preferences and workflows.
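
If you want to count only regular files, leaving out subdirectories and the `.`/`..` entries, `find` offers a third option (a sketch assuming GNU `find` for the `-maxdepth` option):

find . -maxdepth 1 -type f | wc -l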

Next time you find yourself grappling with a folder overflowing with files, remember these handy techniques to streamline your file management tasks.

Linux IPTables: limit the number of HTTP requests from one IP per minute (for CentOS, RHEL and Ubuntu)

Protecting Your Web Server: Implementing IP-based Request Limiting with IPTables on Linux

In the face of relentless cyber attacks, safeguarding your web server becomes paramount. Recently, our server encountered a barrage of requests from a single IP address, causing severe strain on our resources. To mitigate such threats, we employed IPTables, the powerful firewall utility in Linux, to enforce restrictions on the number of requests from individual IPs.

IPTables Rule Implementation (For CentOS/RHEL)

In our case, the lifesaving rule we implemented using IPTables was:

-A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset

This rule effectively limits the number of simultaneous connections from a single IP address to port 80. Once the threshold of 20 connections is breached, any further connection attempts from that IP are rejected with a TCP reset.

To apply this rule, follow these steps:

  1. Edit IPTables Configuration File. Open the file `/etc/sysconfig/iptables` using your preferred text editor.
  2. Add the Rule. Insert the above rule above the line that allows traffic to port 80.
  3. Save the Changes. Save the file and exit the text editor.
  4. Restart IPTables Service. Execute the following command to apply the changes:
    # /sbin/service iptables restart
    

Upon completion, the IPTables service will be restarted, enforcing the new rule and restoring stability to your server.

Additional Example for Ubuntu Linux Distro

For Ubuntu Linux users, the process is slightly different. Below is an example of implementing a similar IPTables rule to limit requests from a single IP address on port 80:

sudo iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset

This command accomplishes the same objective as the previous rule. The `iptables` syntax itself is identical on Ubuntu; the main difference is that the rule is added directly from the command line rather than by editing `/etc/sysconfig/iptables`, and it must be persisted (for example with the `iptables-persistent` package) to survive a reboot.
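
If your goal is literally a per-minute request rate rather than a cap on simultaneous connections, the `hashlimit` match is one way to express it; a sketch (the 60-per-minute threshold is an assumption, tune it to your traffic):

sudo iptables -A INPUT -p tcp --syn --dport 80 -m hashlimit --hashlimit-name http --hashlimit-mode srcip --hashlimit-above 60/minute -j DROP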

Conclusion

In the ever-evolving landscape of cybersecurity, proactive measures like IP-based request limiting are crucial for safeguarding your web infrastructure. By leveraging the capabilities of IPTables, you can fortify your defenses against malicious attacks and ensure the uninterrupted operation of your services.

How to Identify IP Addresses Sending Many Requests in Ubuntu Linux

In today’s interconnected world, network security is paramount. One aspect of network security involves identifying and monitoring IP addresses that may be sending an unusually high volume of requests to your system. In Ubuntu Linux, several tools can help you accomplish this task effectively.

Using netstat

One of the simplest ways to identify IP addresses sending many requests is by using the `netstat` command. Open a terminal and enter the following command:

sudo netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr

This command extracts the remote address of every current TCP and UDP connection and shows how many connections each IP address has open, sorted in descending order (a few stray entries from `netstat`’s header lines may appear in the list and can be ignored).
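
On newer systems where `netstat` is deprecated in favour of `ss`, a rough equivalent looks like this (a sketch; the peer-address column position can vary between `ss` versions, so adjust the `awk` field if needed):

sudo ss -ntu | awk 'NR>1 {print $6}' | cut -d: -f1 | sort | uniq -c | sort -nr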

Utilizing tcpdump

Another powerful tool for network analysis is `tcpdump`. In the terminal, execute the following command:

sudo tcpdump -nn -c 1000 | awk '{print $3}' | cut -d. -f1-4 | sort | uniq -c | sort -nr

This command captures the next 1000 packets and displays a list of the IP addresses involved in them, sorted by packet count.
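
If you are specifically interested in web traffic, the capture can be narrowed with a filter expression (ports 80 and 443 are assumed here; adjust them to the services you actually run):

sudo tcpdump -nn -c 1000 'tcp port 80 or tcp port 443' | awk '{print $3}' | cut -d. -f1-4 | sort | uniq -c | sort -nr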

Monitoring with iftop

If you prefer a real-time view of network traffic, `iftop` is an excellent option. If you haven’t installed it yet, you can do so with the following command:

sudo apt install iftop

Once installed, simply run `iftop` in the terminal:

sudo iftop

`iftop` will display a live list of IP addresses sending and receiving the most traffic on your system.

By utilizing these tools, you can effectively identify IP addresses that may be engaging in suspicious or excessive network activity on your Ubuntu Linux system. Monitoring and promptly addressing such activity can help enhance the security and performance of your network environment.

Stay vigilant and keep your systems secure!

Linux OpenSSL generate self-signed SSL certificate and Apache web server configuration

In a previous post, we covered the creation of a CSR and key for obtaining an SSL certificate. Today, we’ll focus on generating a self-signed SSL certificate, a useful step in development and testing environments. Follow along to secure your website with HTTPS.

Generating the SSL Certificate

To create a self-signed SSL certificate, execute the following command:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout www.shkodenko.com.key -out www.shkodenko.com.crt

This command generates a new 2048-bit RSA key and a self-signed certificate valid for 365 days; the `-nodes` option leaves the private key unencrypted so that Apache can read it without prompting for a passphrase.
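
The key and certificate are written to the current directory, while the Apache configuration below references them under `/etc/ssl/`. As a sketch (these paths are a common convention, adjust them to your own layout), copy the files into place and restrict access to the key:

cp www.shkodenko.com.crt /etc/ssl/certs/
cp www.shkodenko.com.key /etc/ssl/private/
chmod 600 /etc/ssl/private/www.shkodenko.com.key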

Configuring Apache

Next, let’s configure Apache to use the SSL certificate. Add the following configuration to your virtual host file:

<IfModule mod_ssl.c>
  <VirtualHost *:443>
    ServerName shkodenko.com
    ServerAlias www.shkodenko.com
    DocumentRoot /home/shkodenko/public_html
    ServerAdmin webmaster@shkodenko.com

    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/www.shkodenko.com.crt
    SSLCertificateKeyFile /etc/ssl/private/www.shkodenko.com.key

    CustomLog /var/log/apache2/shkodenko.com-ssl_log combined

    <FilesMatch "\.(cgi|shtml|phtml|php)$">
      SSLOptions +StdEnvVars
    </FilesMatch>

    <Directory /home/shkodenko/public_html>
      Options Indexes FollowSymLinks
      AllowOverride All
      Require all granted
    </Directory>
  </VirtualHost>
</IfModule>

This configuration sets up SSL for your domain, specifying the SSL certificate and key files.

Checking Syntax and Restarting Apache

Before restarting Apache, it’s crucial to check the configuration syntax:

apachectl -t

If the syntax is correct, restart Apache to apply the changes:

systemctl restart apache2

or

service apache2 restart

Ensure your website now loads with HTTPS. You’ve successfully generated a self-signed SSL certificate and configured Apache to use it!

Linux chkconfig and service: managing autostart and service state

In Red Hat-like Linux systems such as Red Hat Enterprise Linux, CentOS, Fedora (up to version 15), and similar distributions, service management often involves the use of the /sbin/chkconfig command.

To view the status of the NFS (Network File System) service, you can use the following command:

/sbin/chkconfig --list nfs

This command displays a list indicating whether the NFS service is enabled or disabled for each runlevel (0 through 6).
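
Typical output looks something like the following (the runlevel states will, of course, differ on your system):

nfs             0:off   1:off   2:off   3:on    4:on    5:on    6:off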

To enable the NFS service, execute:

/sbin/chkconfig nfs on

To verify the status of the NFS service, rerun the previous command:

/sbin/chkconfig --list nfs

Now, you can see that the NFS service is enabled for the appropriate runlevels.

To disable the NFS service from starting automatically, use:

/sbin/chkconfig nfs off

Check the status once more to confirm the changes:

/sbin/chkconfig --list nfs

To view the autoload status of all services on the system, use:

/sbin/chkconfig --list | more

For a comprehensive list of available command options, you can refer to the help documentation:

/sbin/chkconfig --help

Additionally, you can manage the NFS service directly using the `/sbin/service` command with various options:

/sbin/service nfs [start|stop|status|restart|reload|force-reload|condrestart|try-restart|condstop]

Some commonly used options include:

  • start: Start the service.
  • status: Check the current state of the service.
  • restart: Restart the service.
  • reload: Apply new configurations without restarting.
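
On newer Red Hat-family releases that have moved to systemd, the rough equivalents of the commands above are `systemctl` calls (a sketch; the unit may be named `nfs-server` rather than `nfs` depending on the release):

systemctl enable nfs-server
systemctl status nfs-server
systemctl restart nfs-server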