How to Integrate GitLab Cloud with Slack for Real-Time Notifications

Integrating GitLab Cloud with Slack can significantly enhance your development workflow by providing real-time notifications about commits, merge requests, pipeline statuses, and other repository activities. In this guide, we’ll walk through the process of setting up GitLab Cloud to send messages to Slack whenever important events occur.


Why Integrate GitLab with Slack?

With GitLab-Slack integration, you can:

  • Get real-time alerts on repository activities.
  • Improve team collaboration with instant updates.
  • Monitor pipeline statuses to track CI/CD workflows.
  • Stay informed about merge requests and commits without leaving Slack.

Step-by-Step Guide to GitLab-Slack Integration

Step 1: Enable Slack Integration in GitLab Cloud

  1. Log in to your GitLab Cloud account.
  2. Navigate to the project you want to integrate.
  3. Go to Settings → Integrations.
  4. Scroll down and find Slack Notifications.

Step 2: Generate a Slack Webhook URL

To allow GitLab to send messages to Slack, you need to set up a webhook:

  1. Open Slack and go to your workspace.
  2. Click on your workspace name (top left corner) → Settings & Administration → Manage Apps.
  3. Search for "Incoming WebHooks" and select Add to Slack.
  4. Choose a Slack channel where GitLab notifications should appear (e.g., #git-updates).
  5. Click Add Incoming WebHooks integration.
  6. Copy the Webhook URL that Slack generates.
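
Before pasting this URL into GitLab in the next step, you can optionally confirm that the webhook works. Below is a minimal Python sketch; it assumes the requests package is installed, and the URL shown is only a placeholder for the one Slack generated for you:

import requests

# Placeholder: replace with the Incoming Webhook URL that Slack generated for you
WEBHOOK_URL = "https://hooks.slack.com/services/XXXX/YYYY/ZZZZ"

# Post a simple test message to the channel the webhook is bound to
response = requests.post(WEBHOOK_URL, json={"text": "GitLab integration test: webhook is reachable."})

# Slack replies with HTTP 200 and the body "ok" when the message was accepted
print(response.status_code, response.text)

If you see the test message appear in your chosen channel, the webhook is ready to be used by GitLab.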

Step 3: Configure GitLab to Use the Webhook

  1. Return to the Slack Notifications settings in GitLab.
  2. Paste the Webhook URL into the provided field.
  3. Choose which events should trigger Slack notifications:
    • Push events (code commits)
    • Issue events (new issues, updates, or closures)
    • Merge request events (approvals, rejections, and updates)
    • Pipeline events (CI/CD status updates)
    • Tag push events (new releases or versions)
    • Wiki page events (if using GitLab Wiki)
  4. Click Save Changes.

Step 4: Customize Notifications (Optional)

If you need more control over what gets sent to Slack, consider these options:

  • Modify the Slack Webhook settings in GitLab.
  • Use Slack slash commands (e.g., /gitlab subscribe) to manage notifications.
  • Set up Slack workflows to format and filter messages for better clarity.

Testing the Integration

Once the setup is complete, test the integration by performing one of the following actions:

  • Commit a change to your GitLab repository.
  • Create a merge request.
  • Run a pipeline.

If everything is configured correctly, you should see a message in your Slack channel confirming the event.


Final Thoughts

Integrating GitLab with Slack streamlines communication and ensures that your team stays up-to-date on project progress. By following these steps, you can optimize your workflow and enhance team collaboration with real-time GitLab notifications in Slack.

🚀 Now it’s your turn! Try this setup and let us know how it improves your development workflow!


If you found this guide helpful, feel free to share it with your developer community! 🔥

Measuring HTTP Request Time with cURL in Linux

When testing web application performance, one of the most useful tools at your disposal is curl. This command-line tool allows developers to measure request times, analyze response latency, and debug performance bottlenecks efficiently.

In this post, we’ll explore how you can use curl to measure HTTP request time, break down various timing metrics, and optimize your API calls for better performance.

Basic Usage: Measure Total Request Time

If you simply want to check how long a request takes from start to finish, use:

curl -o /dev/null -s -w "Time taken: %{time_total}s\n" https://example.com

Explanation:

  • -o /dev/null: Prevents output from being printed to the terminal.
  • -s: Runs in silent mode, hiding progress details.
  • -w "Time taken: %{time_total}s\n": Displays the total request time.

Detailed Timing Breakdown

If you’re debugging slow requests, you may want to break down the request into different phases:

curl -o /dev/null -s -w "Time Lookup: %{time_namelookup}s\nTime Connect: %{time_connect}s\nTime StartTransfer: %{time_starttransfer}s\nTotal Time: %{time_total}s\n" https://example.com

Key Metrics:

  • time_namelookup: Time taken to resolve the domain name.
  • time_connect: Time taken to establish a TCP connection.
  • time_starttransfer: Time until the server starts sending data.
  • time_total: Total time taken for the request.

Saving Results to a Log File

To store request timing data for analysis, append output to a log file:

curl -o /dev/null -s -w "%{time_total}\n" https://example.com >> perf_log.txt

Automating Multiple Requests

If you want to test multiple requests and analyze response times:

for i in {1..5}; do curl -o /dev/null -s -w "Request $i: %{time_total}s\n" https://example.com; done

This will send five requests and print the total time for each.
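
If you would rather gather the numbers in a script and compute an average, here is a small Python sketch that shells out to curl (it assumes curl is on your PATH and Python 3.7+; the URL is just the same example target):

import statistics
import subprocess

URL = "https://example.com"  # example target, same as above
RUNS = 5

times = []
for i in range(1, RUNS + 1):
    # -w "%{time_total}" makes curl print only the total request time
    result = subprocess.run(
        ["curl", "-o", "/dev/null", "-s", "-w", "%{time_total}", URL],
        capture_output=True, text=True, check=True,
    )
    elapsed = float(result.stdout)
    times.append(elapsed)
    print(f"Request {i}: {elapsed:.3f}s")

print(f"Average: {statistics.mean(times):.3f}s")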

Comparing HTTP vs. HTTPS Performance

To compare response times for an API running over HTTP and HTTPS:

curl -o /dev/null -s -w "HTTP Time: %{time_total}s\n" http://example.com
curl -o /dev/null -s -w "HTTPS Time: %{time_total}s\n" https://example.com

You might notice HTTPS takes slightly longer due to encryption overhead.

Using cURL with Proxy for Network Debugging

If you’re testing behind a proxy, you can measure request times using:

curl -x http://proxy.example.com:8080 -o /dev/null -s -w "Total Time: %{time_total}s\n" https://example.com

Final Thoughts

Understanding HTTP request timing is crucial for optimizing API response times and diagnosing performance bottlenecks. By leveraging curl's timing metrics, developers can effectively analyze and improve web application performance.

Do you use curl for performance testing? Share your experiences in the comments below!

Ignoring Local Changes to Files in Git

Ignoring Local Changes to Files in Git: The Power of --assume-unchanged

Introduction

As developers, we often work on projects where certain files, like composer.lock, are frequently updated locally but should not be committed to the repository. However, adding these files to .gitignore might not be the best approach, especially if they need to be tracked in the repository but ignored temporarily.

This is where Git’s --assume-unchanged flag comes in handy! In this blog post, we’ll explore what it does, when to use it, and how to revert the changes when needed.

What Does git update-index --assume-unchanged Do?

The command:

 git update-index --assume-unchanged composer.lock

Explanation:

  • git update-index is a low-level Git command that modifies the index (staging area).
  • --assume-unchanged tells Git to mark the file as "unchanged" in the working directory.

Effect of Running This Command:

  • Git stops tracking modifications to the specified file (composer.lock in this case).
  • If you edit composer.lock, Git won’t detect the changes and won’t include them in future commits.
  • The file remains in the repository, but any local modifications stay untracked.

When Should You Use --assume-unchanged?

This feature is useful in the following scenarios:

  • You have local environment-specific changes in a file (like composer.lock or .env) that you don’t want to commit but also don’t want to ignore permanently.
  • You are working on a project where certain configuration files keep changing, but you don’t want those changes to show up in git status every time.
  • You need a temporary workaround instead of modifying .gitignore or creating a separate local branch.

How to Check If a File Is Marked as --assume-unchanged?

To check whether a file has been marked with --assume-unchanged, use:

 git ls-files -v | grep '^h'

Files prefixed with a lowercase h are marked as assume-unchanged.

Reverting the --assume-unchanged Status

If you later decide that you want Git to track changes to the file again, use:

 git update-index --no-assume-unchanged composer.lock

This command removes the "assume unchanged" flag, allowing Git to detect modifications as usual.

Important Considerations

  • This does not remove the file from version control; it only affects local modifications.
  • Other developers won’t be affected by this command—it’s purely a local setting.
  • If you pull new changes from a remote repository that modify the file, you may experience conflicts.

Alternative Approach: .git/info/exclude

If you want to ignore a file only for yourself, without modifying .gitignore, you can add it to .git/info/exclude:

 echo 'composer.lock' >> .git/info/exclude

This works similarly to .gitignore but applies only to your local repository. Keep in mind that, like .gitignore, the exclude file only affects untracked files; it will not hide modifications to a file Git already tracks, which is why --assume-unchanged (or --skip-worktree) remains the right tool for tracked files such as composer.lock.

Conclusion

Using git update-index --assume-unchanged is a great way to temporarily ignore changes to tracked files in Git. It’s particularly useful for developers working on projects where some files change frequently but shouldn’t be committed every time.

Next time you’re tired of seeing local changes cluttering your git status, try this command and make your workflow cleaner!

Have You Used This Command Before?

If you have any tips or experiences using --assume-unchanged, share them in the comments! 🚀

Analyzing Apache Benchmark (ab) Test Results Using Python and Tesseract OCR

Introduction

Performance testing is an essential part of web application development. Apache Benchmark (ab) is a popular tool for load testing APIs and web applications. However, when working with multiple test results in the form of screenshots, analyzing them manually can be cumbersome.

In this article, we will demonstrate how to extract performance data from ab test result screenshots using Python and Tesseract OCR. We will then compare different test runs to identify performance trends and bottlenecks.


Extracting Text from ab Test Screenshots

To automate the extraction of data from Apache Benchmark screenshots, we will use pytesseract, an OCR (Optical Character Recognition) library that allows us to read text from images.

Prerequisites

Before running the script, install the required dependencies:

pip install pytesseract pillow pandas

Also, make sure Tesseract OCR is installed on your system:

  • Ubuntu/Debian:
    sudo apt update
    sudo apt install tesseract-ocr
  • Windows:
    Download and install Tesseract from UB Mannheim.

After installation, verify that tesseract is available by running:

tesseract --version

Python Script for Extracting Text

The following Python script extracts text from two ab test screenshots and prints the results:

import pytesseract
from PIL import Image

# Path to the images
image_path_1 = "path/to/first_ab_test_screenshot.png"
image_path_2 = "path/to/second_ab_test_screenshot.png"

# Extract text from images using Tesseract
text_1 = pytesseract.image_to_string(Image.open(image_path_1))
text_2 = pytesseract.image_to_string(Image.open(image_path_2))

# Print extracted text
print("Extracted Text from Test 1:
", text_1)
print("
Extracted Text from Test 2:
", text_2)

This script reads the images and extracts all text, including metrics such as requests per second, response times, and failure rates.


Comparing Apache Benchmark Test Results

Once we have extracted the text, we can analyze key performance metrics from multiple test runs.
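
For example, a few regular expressions are usually enough to pull the headline numbers out of the text returned by pytesseract. This is a rough sketch; the patterns assume the standard ab output labels and that OCR read them cleanly:

import re

def parse_ab_metrics(text):
    """Pull a few key metrics out of ab output captured as plain text."""
    patterns = {
        "requests_per_second": r"Requests per second:\s+([\d.]+)",
        "mean_time_ms": r"Time per request:\s+([\d.]+)\s+\[ms\]\s+\(mean\)",
        "transfer_rate_kb_s": r"Transfer rate:\s+([\d.]+)",
    }
    metrics = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, text)
        metrics[name] = float(match.group(1)) if match else None
    return metrics

# text_1 and text_2 are the strings extracted by the OCR script above
print(parse_ab_metrics(text_1))
print(parse_ab_metrics(text_2))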

Example of Performance Comparison

Here’s an example of comparing two test runs:

Metric                     Test 1    Test 2    Conclusion
Total Requests             4021      4769      Test 2 handled more requests
Requests per Second        57.44     67.78     Test 2 is more performant
Mean Response Time (ms)    348.2     295.1     Test 2 has lower response time
Max Response Time (ms)     1480      1684      Test 2 has some slow spikes
Transfer Rate (KB/s)       35.99     42.46     Test 2 has better data transfer

Key Insights:

  • Test 2 performed better in terms of handling more requests and achieving a lower average response time.
  • Transfer rate improved, meaning the system processed data more efficiently.
  • ⚠️ Max response time in Test 2 increased, indicating some requests experienced higher latency.

Next Steps for Optimization

If we observe performance degradation, here are some actions we can take:

  • Check backend logs to identify slow database queries or API calls.
  • Monitor CPU & Memory Usage during the test to detect potential resource bottlenecks.
  • Optimize database queries using indexes and caching.
  • Load balance traffic across multiple servers if the system is reaching capacity.

Conclusion

This approach demonstrates how Python, pytesseract, and ab test results can be combined to automate performance analysis. By extracting and comparing key metrics, we can make informed decisions to optimize our web applications.

🚀 Next Steps: Try this approach with your own API performance tests and share your insights with the community!


📢 Do you have experience analyzing ab test results? Share your findings in the comments below!

How to Count the Number of Files in a Folder Efficiently (Even for Large Directories)

When working with folders that contain a huge number of files, counting them efficiently becomes crucial, especially in high-performance or automated environments. In this guide, we’ll explore different ways to count files in a directory using Linux command-line tools and Perl scripting.

📌 Method 1: Using ls and wc (Fast and Simple)

If you’re dealing with a directory containing millions of files, the standard ls command can be slow because it sorts files by default. To improve performance, use the -f flag to disable sorting:

cd /path/to/large_directory
ls -f | wc -l

🔹 Breakdown of the command:

  • ls -f → Lists all files and directories without sorting (faster for large folders).
  • wc -l → Counts the number of lines in the output (which equals the number of entries).

💡 Note: This method also counts hidden files and the . and .. entries. If you want to exclude . and .. (hidden files will still be counted), use:

ls -A | wc -l

📌 Method 2: Using find (More Reliable)

A more accurate way to count only regular files (excluding directories and special files) is using find:

find /path/to/large_directory -type f | wc -l

🔹 Why use find?

  • Ignores directories, counting only files.
  • Works well with huge directories (doesn’t load everything into memory).

💡 Tip: If you want to count files recursively inside subdirectories, find is the best choice.


📌 Method 3: Using Perl (For Scripting Enthusiasts)

If you prefer Perl, you can use this one-liner:

cd /path/to/large_directory
perl -e 'opendir D, "."; @files = grep {!/^\.{1,2}$/} readdir D; closedir D; print scalar(@files)."\n";'

🔹 How it works:

  • Opens the directory.
  • Uses readdir to fetch all entries.
  • Filters out . and .. (current and parent directory).
  • Prints the total number of files.
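
If Python is closer to hand than Perl, the same count can be done with os.scandir, which streams directory entries instead of loading the whole listing into memory. A minimal sketch:

import os

def count_files(path):
    """Count regular files (not directories) directly inside path."""
    total = 0
    with os.scandir(path) as entries:
        for entry in entries:
            if entry.is_file(follow_symlinks=False):
                total += 1
    return total

print(count_files("/path/to/large_directory"))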

📌 Method 4: Using stat (Instant, but It Counts Subdirectories)

On most Linux filesystems (including Ext4), you can read a directory's hard-link count instantly with stat:

stat -c "%h" /path/to/large_directory

🔹 This value equals the number of immediate subdirectories plus 2 (the directory's own . entry plus each subdirectory's .. entry), so it returns instantly but says nothing about regular files. Treat it as a lightning-fast subdirectory count, not a file count.


🏆 Which Method is Best?

Method                  Speed        Works for Large Directories?   Excludes Directories?
ls -f | wc -l           ⚡ Fast       ✅ Yes                          ❌ No (counts all entries)
find -type f | wc -l    ⏳ Slower     ✅ Yes                          ✅ Yes
Perl script             ⏳ Medium     ✅ Yes                          ✅ Yes
stat (link count)       🚀 Instant    ✅ Yes                          ❌ No (counts subdirectories only)

📌 Conclusion

  • Use ls -f | wc -l for quick estimations.
  • Use find -type f | wc -l for accurate file-only counts.
  • Use Perl if you need scripting flexibility.
  • Use stat when an instant subdirectory count is all you need.

🔹 Which method do you prefer? Let us know in the comments! 🚀



Ultimate Guide to Installing Software on Ubuntu 24.04

Ubuntu 24.04 is a powerful and user-friendly Linux distribution, but new users often wonder how to install software efficiently. In this guide, we’ll explore multiple ways to install applications, from traditional package managers to direct .deb installations.

1. Installing Software via APT (Recommended)

APT (Advanced Package Tool) is the default package manager in Ubuntu. It’s the easiest and safest way to install software as it handles dependencies automatically.

To install a package, use the following command:

sudo apt update && sudo apt install package-name

For example, to install VLC media player:

sudo apt update && sudo apt install vlc

2. Installing Software via Snap

Snap is a universal package format supported by Canonical. Snaps are self-contained and include dependencies, making them easy to install.

To install a Snap package, use:

sudo snap install package-name

For example, to install the latest version of Spotify:

sudo snap install spotify

3. Installing Software via Flatpak

Flatpak is another universal package format. First, install Flatpak support:

sudo apt install flatpak

Then, add the Flathub repository:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

To install an application, use:

flatpak install flathub package-name

For example, to install GIMP:

flatpak install flathub org.gimp.GIMP

4. Installing Software from a .deb Package

Some applications provide .deb installation files, which you can download from their official websites. To install a .deb package, use:

sudo dpkg -i package-name.deb

For example, to install Google Chrome:

wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo dpkg -i google-chrome-stable_current_amd64.deb

If there are missing dependencies, fix them with:

sudo apt -f install

5. Installing Software via AppImage

AppImage is a portable application format that doesn’t require installation. Simply download the AppImage file, make it executable, and run it:

chmod +x application.AppImage
./application.AppImage

For example, to run Krita:

wget https://download.kde.org/stable/krita/5.2.2/krita-5.2.2-x86_64.appimage
chmod +x krita-5.2.2-x86_64.appimage
./krita-5.2.2-x86_64.appimage

6. Installing Software via PPA (Personal Package Archive)

Some applications are not available in the official repositories, but developers provide PPAs. To add a PPA and install software:

sudo add-apt-repository ppa:repository-name
sudo apt update
sudo apt install package-name

For example, to install the latest version of LibreOffice:

sudo add-apt-repository ppa:libreoffice/ppa
sudo apt update
sudo apt install libreoffice

Conclusion

Ubuntu 24.04 offers multiple ways to install software, each suited for different scenarios. For most users, APT and Snap are the easiest options, while .deb packages and PPAs are useful for getting the latest software releases. Choose the method that works best for you and enjoy your Ubuntu experience!

How to Get PostgreSQL Version Using SQL Query

When working with PostgreSQL, it’s often necessary to check the version of the database server to ensure compatibility with features, extensions, and security updates. Here’s a quick guide on how to retrieve the PostgreSQL version using SQL queries.

Using version() Function

The simplest way to get detailed PostgreSQL version information is by running the following SQL query:

SELECT version();

This will return a string containing the PostgreSQL version along with additional system details. For example:

PostgreSQL 15.2 (Ubuntu 15.2-1.pgdg22.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0, 64-bit

Using SHOW server_version

If you only need the numeric version of PostgreSQL (without extra system details), you can use:

SHOW server_version;

This will return a cleaner output, such as:

15.2
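
The same queries are easy to run from application code. Here is a minimal Python sketch using psycopg2; the connection settings are placeholders for your own environment:

import psycopg2

# Placeholder connection settings: adjust for your environment
conn = psycopg2.connect(host="localhost", dbname="postgres", user="postgres", password="secret")

with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])   # full version string, same as SELECT version()

    cur.execute("SHOW server_version;")
    print(cur.fetchone()[0])   # just the version number, e.g. 15.2

conn.close()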

Why Knowing the PostgreSQL Version Matters

  • Feature Compatibility: Some features are only available in specific PostgreSQL versions.
  • Performance Improvements: PostgreSQL frequently enhances performance and query optimization.
  • Security Updates: Keeping your database up-to-date ensures security patches are applied.

By using these simple queries, you can quickly determine the PostgreSQL version and ensure your database environment is up-to-date and compatible with your applications.


Do you find this helpful? Follow our community for more PostgreSQL and database-related tips!

Changing Domain from rndpwd.info to rndpwd.shkodenko.com

Random Password Generator

We are excited to announce that our service, previously accessible at https://rndpwd.info, has now moved to a new domain: https://rndpwd.shkodenko.com.

Why the Change?

This transition allows us to integrate our service under a unified domain, making it easier to manage and ensuring better branding consistency. All functionalities remain the same, and we are committed to providing the same level of security and performance as before.

What You Need to Do

If you have been using https://rndpwd.info, simply update your bookmarks and any API integrations to point to https://rndpwd.shkodenko.com.

Redirects and Support

To ensure a smooth transition, we have implemented automatic redirects from the old domain. However, if you encounter any issues, feel free to reach out.

Thank you for your continued support!


Any comments, donations, and support are very welcome. 😊

Extracting and Using an RSA Public Key for JWT Verification in Laravel

Introduction

When working with JWT authentication in Laravel, you may encounter the error:

openssl_verify(): Supplied key param cannot be coerced into a public key

This typically happens when verifying an RS256-signed JWT with an incorrect or improperly formatted public key. In this guide, we’ll walk through the steps to extract and use the correct RSA public key for JWT verification.

Understanding the Issue

JWT Header Inspection

Before solving the issue, inspect the JWT header to determine the signing algorithm:

echo "YOUR_JWT_TOKEN_HERE" | cut -d "." -f1 | base64 --decode

If you see something like this:

{
  "alg": "RS256",
  "kid": "public:01fa2927-9677-42bb-9233-fa8f68f261fc"
}

  • "alg": "RS256" means the token is signed with an RSA private key, so the matching public key is required for verification.
  • "kid" (Key ID) helps locate the correct public key.

Finding the JWKS (JSON Web Key Set) URL

Public keys for JWT verification are often stored in a JWKS endpoint. If your JWT includes an iss (issuer) field like:

"iss": "https://id-int-hydra.dev.local/"

Try accessing the JWKS URL:

https://id-int-hydra.dev.local/.well-known/jwks.json

Use this command to retrieve the public key information:

curl -s https://id-int-hydra.dev.local/.well-known/jwks.json | jq
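
A JWKS can contain several keys, so it helps to pick the entry whose kid matches the one from the token header. Here is a short Python sketch of that lookup (it assumes the requests package; the URL and kid are the example values used above):

import requests

JWKS_URL = "https://id-int-hydra.dev.local/.well-known/jwks.json"  # example issuer from above
TOKEN_KID = "public:01fa2927-9677-42bb-9233-fa8f68f261fc"          # kid from the decoded JWT header

jwks = requests.get(JWKS_URL).json()

# Pick the JWK whose key ID matches the token header
key = next((k for k in jwks["keys"] if k.get("kid") == TOKEN_KID), None)
if key is None:
    raise SystemExit("No key in the JWKS matches the token's kid")

print(key["kty"], key["alg"], key["kid"])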

Extracting the RSA Public Key from JWKS

If the JWKS response contains:

{
  "keys": [
    {
      "kid": "public:01fa2927-9677-42bb-9233-fa8f68f261fc",
      "kty": "RSA",
      "alg": "RS256",
      "n": "base64url-encoded-key",
      "e": "AQAB"
    }
  ]
}

You need to convert the n and e values to PEM format.

Python Script to Convert JWKS to PEM

Create a script (convert_jwks_to_pem.py) with the following code:

import json
import base64
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.backends import default_backend

# Example JWKS response
jwks = {
  "keys": [
    {
      "kid": "public:01fa2927-9677-42bb-9233-fa8f68f261fc",
      "kty": "RSA",
      "alg": "RS256",
      "n": "base64url_encoded_n_value",
      "e": "AQAB"
    }
  ]
}

def base64url_decode(input):
    input += '=' * (-len(input) % 4)  # Pad to a multiple of 4 characters
    return base64.urlsafe_b64decode(input)

key = jwks["keys"][0]
modulus = int.from_bytes(base64url_decode(key["n"]), byteorder='big')
exponent = int.from_bytes(base64url_decode(key["e"]), byteorder='big')

public_key = rsa.RSAPublicNumbers(exponent, modulus).public_key(default_backend())

pem = public_key.public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo
)

print(pem.decode())

Run the script to generate a valid RSA public key in PEM format:

python convert_jwks_to_pem.py > public_key.pem

Verifying the Public Key

To confirm the validity of the generated public key, run:

openssl rsa -in public_key.pem -pubin -text

If successful, you should see details about the RSA key structure.

Using the Public Key in Laravel

1️⃣ Store the Public Key in .env

JWT_PUBLIC_KEY="-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqh...
-----END PUBLIC KEY-----"

2️⃣ Update Laravel Configuration (config/auth.php)

'jwt_public_key' => env('JWT_PUBLIC_KEY'),

3️⃣ Modify JWT Verification in Laravel

Modify your controller to load the correct key:

use Firebase\JWT\JWT;
use Firebase\JWT\Key;

$publicKey = config('auth.jwt_public_key');
$decoded = JWT::decode($token, new Key($publicKey, 'RS256'));

Conclusion

By following these steps, you can successfully extract, verify, and use an RSA public key for JWT authentication in Laravel. This ensures secure and correct token verification in your application.

Let me know in the comments if you have any questions or need further clarification! 🚀

Linux find: Find Files in a Folder That Changed Today

The find command in Linux is a powerful tool for searching files and directories based on various criteria, including modification time. If you need to find files in a specific folder that were modified today, you can use the -mtime option.

Basic Command

To list all files in a directory that were modified within the last 24 hours, run:

find /path/to/search -mtime -1

How It Works

  • find – The command used to search for files and directories.
  • /path/to/search – Replace this with the directory where you want to perform the search.
  • -mtime -1 – Finds files modified within the last 24 hours.

Understanding -mtime Values

  • -mtime 0 → Finds files modified within the last 24 hours (find measures age in whole days and rounds down, so an age of 0 days means less than one day old).
  • -mtime -1 → Effectively the same thing: files whose age, in whole days, is less than 1.
  • -mtime +1 → Finds files modified more than 48 hours ago (an age of more than one whole day).

💡 Tip: To match files changed since the last midnight rather than in a sliding 24-hour window, add GNU find's -daystart option, for example: find /path/to/search -daystart -mtime 0

Include Subdirectories

By default, find searches recursively within all subdirectories. If you want to restrict the search to the current folder only, use:

find /path/to/search -maxdepth 1 -mtime -1

Filtering by File Type

To find only files (excluding directories):

find /path/to/search -type f -mtime -1

To find only directories:

find /path/to/search -type d -mtime -1

Sorting Results

If you want to sort the results by modification time (newest first), you can combine find with ls:

find /path/to/search -mtime -1 -type f -exec ls -lt {} +

Finding Files Modified in the Last X Hours

If you need more precision (e.g., finding files modified within the last 6 hours), use the -mmin option:

find /path/to/search -mmin -360

(360 minutes = 6 hours)

Executing a Command on Found Files

To delete files modified within the last 24 hours, use:

find /path/to/search -mtime -1 -type f -delete

⚠️ Be careful with -delete—there is no undo!

Alternatively, to compress the found files:

find /path/to/search -mtime -1 -type f -exec tar -czf modified_today.tar.gz {} +
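
If you need the same 24-hour filter inside a script, a short Python sketch with os.walk and file modification times does the job; the path is a placeholder:

import os
import time

CUTOFF = time.time() - 24 * 60 * 60  # 24 hours ago, as a Unix timestamp

recent = []
for root, dirs, files in os.walk("/path/to/search"):
    for name in files:
        full_path = os.path.join(root, name)
        # st_mtime is the file's last modification time
        if os.stat(full_path).st_mtime >= CUTOFF:
            recent.append(full_path)

print("\n".join(recent))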

Conclusion

The find command is an essential tool for system administrators and developers who need to locate and manage recently modified files efficiently. Whether you’re looking for logs, recent uploads, or system changes, these techniques will help streamline your workflow.

