🧭 What is Symfony? And How Is It Different from Laravel?

🧭 Symfony is a powerful open-source PHP framework focused on stability, flexibility, and reusable components. It’s widely used to build complex, scalable web applications, enterprise systems, and RESTful APIs. At its core, Symfony is built on a component-based architecture — and many of its components are actually used by other frameworks, including… Laravel.

Yes, you read that right: Laravel is built on top of Symfony components like HttpFoundation, Routing, Console, EventDispatcher, and more. But despite that, there are several major differences between the two.


🔍 Key Differences Between Symfony and Laravel

1. Architecture Philosophy

  • Symfony follows a flexible and modular approach — you choose exactly which components you want to use.
  • Laravel comes with most features "batteries-included" — everything is ready to go out of the box.

2. Configuration Style

  • Symfony encourages configuration using YAML, XML, or PHP files.
  • Laravel primarily uses PHP config files located in the config/ directory and a simple .env file for environment variables.

3. Project Structure

  • Symfony enforces a stricter project structure, with a strong focus on PSR standards and SOLID principles.
  • Laravel has a simpler, more beginner-friendly structure, though it sacrifices some architectural flexibility.

4. Templating

  • Symfony uses Twig — a secure, fast, and lightweight templating engine.
  • Laravel uses Blade, which is tightly integrated with Laravel’s core and favored by many PHP developers.

5. Strengths and Use Cases

  • Symfony is ideal for large-scale, enterprise-grade applications that demand long-term support (with LTS releases).
  • Laravel is great for rapid development of MVPs, startups, or projects with a quick release cycle.

🎯 Conclusion

Symfony is not a direct replacement for Laravel — it’s simply a different tool for different needs. If you want full control over architecture, advanced service container management, and long-term support — Symfony is your friend. Laravel is all about developer happiness and speed. Symfony is about stability and scale.

In the next post, we’ll dive into installing Symfony and creating your first project. Stay tuned! 😉

Laravel Facades Uncovered: Convenient, but Always Safe?

Facades are a common and convenient practice in Laravel development.
No need to import services manually — just call Cache::get() and go.
But do you really know what’s happening behind the scenes?

In this post, we’ll break it down:

1. How Facades Work Under the Hood

A facade is just a static wrapper around a service in the Laravel Service Container.

Cache::get('key');

is essentially the same as:

app('cache')->get('key');
// or via dependency injection:
$cache->get('key');

So, facades aren’t really static — they resolve real objects from the container dynamically.


2. What’s the Catch?

Facades are convenient, but:

  • Harder to test — mocking with Cache::shouldReceive() or Mail::fake() can be unintuitive;
  • Hidden dependencies — classes don’t explicitly declare what they rely on;
  • Weak IoC principles — swapping implementations becomes more difficult.

3. When Should You Use Dependency Injection Instead?

Prefer DI when you:

  • Want testable, flexible code;
  • Work with multiple implementations of a contract;
  • Need to clearly express a class’s dependencies.

For example:

use Illuminate\Contracts\Cache\Repository as Cache;

class UserService 
{
    public function __construct(private Cache $cache) {}

    public function getUser(int $id)
    {
        return $this->cache->remember("user:$id", 60, fn() => User::find($id));
    }
}

4. Refactoring from Facade to Dependency Injection

Before:

class UserService 
{
    public function getUser(int $id)
    {
        return Cache::get("user:$id");
    }
}

After:

use Illuminate\Contracts\Cache\Repository as Cache;

class UserService 
{
    public function __construct(private Cache $cache) {}

    public function getUser(int $id)
    {
        return $this->cache->get("user:$id");
    }
}

TL;DR

Facades are fast and easy to use.
But for scalable, testable, and maintainable architecture — consider dependency injection instead.

Do you use facades in your Laravel projects? Share your thoughts in the comments!

🎧 How to Convert and Combine Multiple iPhone .m4a Audio Files into One .mp3 on Linux using ffmpeg

Have you ever recorded several audio files on your iPhone, only to realize you need them in .mp3 format for easier sharing, editing, or publishing? Here’s a simple and effective way to convert and merge multiple .m4a files into a single .mp3 file using ffmpeg on a Linux system.

🛠️ Step 1: Convert .m4a Files to .mp3
Assume you have three .m4a files saved from your iPhone:

ffmpeg -i 'file-1.m4a' -codec:a libmp3lame -qscale:a 2 2025-05-08_file-1.mp3
ffmpeg -i 'file-2.m4a' -codec:a libmp3lame -qscale:a 2 2025-05-08_file-2.mp3
ffmpeg -i 'file-3.m4a' -codec:a libmp3lame -qscale:a 2 2025-05-08_file-3.mp3

📌 Tip: The -qscale:a 2 flag sets high audio quality. Lower numbers mean better quality (and larger files).
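
With more than a handful of recordings, typing one command per file gets old. A small loop can generate the same command for every .m4a in the folder — shown here as a dry run with echo so you can review before executing, and with stand-in filenames for the demo:

```shell
# Work in a scratch directory and create stand-in files for the demo.
cd "$(mktemp -d)"
touch file-1.m4a file-2.m4a file-3.m4a

# Print one conversion command per .m4a file.
# Remove the leading "echo" to actually run the conversions.
for f in *.m4a; do
    echo ffmpeg -i "$f" -codec:a libmp3lame -qscale:a 2 "2025-05-08_${f%.m4a}.mp3"
done
```

Once the printed commands look right, drop the echo and let it run.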

📄 Step 2: Create a Playlist File for Concatenation
You’ll need to prepare a text file that lists the MP3 files in the correct order for merging:

cat mp3_list.txt
file '2025-05-08_file-1.mp3'
file '2025-05-08_file-2.mp3'
file '2025-05-08_file-3.mp3'
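
Writing this list by hand is fine for three files; for more, you can generate it. This sketch uses stand-in files named like the Step 1 output — adjust the glob to your own names:

```shell
# Work in a scratch directory with stand-in files named like the Step 1 output.
cd "$(mktemp -d)"
touch 2025-05-08_file-1.mp3 2025-05-08_file-2.mp3 2025-05-08_file-3.mp3

# One "file '...'" line per MP3, in shell-sorted order.
for f in 2025-05-08_file-*.mp3; do
    printf "file '%s'\n" "$f"
done > mp3_list.txt

cat mp3_list.txt
```

Note that the glob expands in sorted order, which is exactly what you want with date-prefixed filenames.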

🔄 Step 3: Merge the MP3 Files into One
Use ffmpeg with the concat demuxer to join all MP3s into a single file:

ffmpeg -f concat -safe 0 -i mp3_list.txt -c copy 2025-05-08__full_output_combined_3_files.mp3

And voilà — you now have a single MP3 file that combines all three recordings!

🔗 See Also
For more on merging MP3 files with ffmpeg, check out this helpful StackOverflow thread.

How to Merge and Extract PDF Pages on Linux Like a Pro


Working with PDF files on Linux is easier than you think. Whether you need to merge multiple PDFs into a single file or extract specific pages (even in reverse order!), Linux has powerful and free tools to get the job done.

Here’s your ultimate guide to merging and extracting PDF pages using the command line.


🔧 Tool of Choice: pdftk

✅ Install pdftk on Ubuntu Linux:

sudo apt update
sudo apt install pdftk

📚 Merge Multiple PDF Files into One

If you have several PDFs and want to combine them:

pdftk file1.pdf file2.pdf file3.pdf cat output merged.pdf

This command will create a new file called merged.pdf containing all pages from file1.pdf, file2.pdf, and file3.pdf in that order.


✂️ Extract Specific Pages from a PDF

Example: Extract pages 1 to 5

pdftk input.pdf cat 1-5 output output_pages_1_to_5.pdf

Extract specific non-consecutive pages:

pdftk input.pdf cat 1 3 5 output selected_pages.pdf

🔁 Extract Pages in Reverse Order

Let’s say you want to extract the first five pages from input.pdf but in reverse order—page 5 becomes 1, page 4 becomes 2, etc.

pdftk input.pdf cat 5 4 3 2 1 output reversed_pages_1_to_5.pdf

🐚 BONUS: Bash Script to Reverse Page Ranges

Want to automate reversing a range of pages? Here’s a neat bash snippet:

START=1
END=5
REVERSED=$(seq $END -1 $START | tr '\n' ' ')
pdftk input.pdf cat $REVERSED output reversed_range.pdf
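
The same idea, wrapped in a small reusable function. It only prints the pdftk command (a dry run), so you can check the page list before running it for real; input.pdf is a placeholder:

```shell
# Print the pdftk command that would extract pages START..END in reverse order.
reverse_range() {
    local start=$1 end=$2 pages
    pages=$(seq "$end" -1 "$start" | tr '\n' ' ')
    echo "pdftk input.pdf cat ${pages}output reversed_${start}_to_${end}.pdf"
}

reverse_range 1 5
# -> pdftk input.pdf cat 5 4 3 2 1 output reversed_1_to_5.pdf
```

Pipe the output to sh (or drop the echo) once you’re happy with it.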

💡 Alternatives

If you want GUI or extra formatting features, check out:

  • PDF Arranger – GUI tool to merge/reorder visually

    sudo apt install pdfarranger
  • qpdf – powerful CLI tool:

    qpdf input.pdf --pages . 5-1 -- reversed.pdf

🏁 Final Thoughts

With tools like pdftk, qpdf, and pdfarranger, working with PDFs on Linux becomes a breeze. Whether you’re splitting, merging, or reordering pages, there’s no need for paid or proprietary software.

Got a favorite PDF tip or tool on Linux? Drop it in the comments and let’s build an even better toolbox together! 🧰🐧

How to Remove Lines Starting with a Pattern from a File in Linux


Sometimes, when working with log files, backups, or system-generated data, you may want to delete lines that begin with a specific pattern. Luckily, Linux provides several quick and powerful ways to do this from the command line.

In this article, we’ll explore how to remove lines that start with a certain prefix or string using tools like sed and egrep. These methods are efficient and work well for automation or scripting.

Use Case Example

Let’s say you have a log file with a list of changed files, and you want to remove all lines that start with ./var/cache, which typically includes cached files that don’t need to be tracked in backups or version control.


🔧 Method 1: Using sed

The sed (stream editor) tool is very powerful for text processing. You can delete lines that match a certain pattern using the d command.

sed '/^\.\/var\/cache/d' /home/taras/website_backups/changed_files.log > /home/taras/website_backups/changed_files_cleaned.log

Explanation:

  • ^\.\/var\/cache — Matches lines starting with ./var/cache
  • d — Deletes the matched lines
  • The output is redirected to a new cleaned file

Tip: Always test sed commands on a copy of your file to avoid accidental data loss.
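
Here’s the whole flow on a tiny sample file (contents invented for the demo), so you can see exactly which lines survive:

```shell
# Work in a scratch directory with a small stand-in for changed_files.log.
cd "$(mktemp -d)"
cat > changed_files.log <<'EOF'
./var/cache/app/item1
./public/index.php
./var/cache/app/item2
./app/config.php
EOF

# Delete every line starting with ./var/cache; write the rest to a new file.
sed '/^\.\/var\/cache/d' changed_files.log > changed_files_cleaned.log
cat changed_files_cleaned.log
# -> ./public/index.php
# -> ./app/config.php
```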


🧰 Method 2: Using egrep (or grep -E)

Another simple and readable method is to use egrep or grep -E with the -v (invert match) option, which excludes matching lines.

egrep -v '^\.\/var\/cache' /home/taras/website_backups/changed_files.log > /home/taras/website_backups/changed_files_cleaned2.log

Or using grep -E:

grep -Ev '^\.\/var\/cache' /home/taras/website_backups/changed_files.log > /home/taras/website_backups/changed_files_cleaned2.log

🔍 How to Verify the Results

You can compare the original and the cleaned file using the diff command:

diff /home/taras/website_backups/changed_files.log /home/taras/website_backups/changed_files_cleaned.log

or

diff /home/taras/website_backups/changed_files.log /home/taras/website_backups/changed_files_cleaned2.log

This will show you exactly what lines were removed.


🧪 More Practical Examples

Remove comment lines starting with #:

sed '/^#/d' config.txt > config_cleaned.txt

Remove empty lines:

sed '/^$/d' notes.txt > notes_no_empty.txt

Remove all lines starting with "DEBUG" in a log:

grep -v '^DEBUG' app.log > app_cleaned.log

🛡️ Pro Tip: In-place Editing with sed

If you want to edit the file directly (be careful!):

sed -i '/^\.\/var\/cache/d' /home/taras/website_backups/changed_files.log

The -i flag tells sed to edit the file in place.
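
A safer variant worth knowing: with GNU sed, a suffix right after -i keeps the original as a backup while editing in place. A quick self-contained demo with an invented sample file:

```shell
# Scratch directory and a sample file with one comment line.
cd "$(mktemp -d)"
printf '# temporary note\nsetting=value\n' > config.txt

# GNU sed: the suffix after -i keeps the original as config.txt.bak.
sed -i.bak '/^#/d' config.txt

cat config.txt       # the comment line is gone
cat config.txt.bak   # the untouched original
```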


🧠 Conclusion

Whether you’re cleaning up logs, filtering backups, or preprocessing data for scripts, these simple command-line tricks with sed and grep can save you time and effort.

If you’re working with Linux regularly, mastering these tools is a must. They’re versatile, fast, and script-friendly.


📣 Share Your Use Case

Have you used similar commands in your daily work? Share your examples in the comments or drop me a message—I’d love to include more community-driven use cases in future posts!



How to Clean Git History from Secrets Using BFG Repo-Cleaner (Ubuntu-Friendly Guide)


Have you ever accidentally committed secrets like database credentials or API keys to your Git repository? Don’t panic — you can clean your repo history and protect your sensitive data.

In this post, I’ll walk you through using BFG Repo-Cleaner to scrub secrets from Git history — with simple step-by-step instructions and working commands for Ubuntu Linux users.


⚠️ Why This Matters

Once a secret is committed, it lives in your Git history. Even if you delete the line or file, it’s still retrievable unless you rewrite the history.


🧰 Tools We’ll Use

  • BFG Repo-Cleaner – a fast alternative to git filter-branch
  • Git
  • Java Runtime (required to run the BFG .jar)

🐧 Step-by-Step Guide (Ubuntu Linux)

✅ Step 1: Install Java

BFG requires Java to run:

sudo apt update
sudo apt install default-jre -y

✅ Step 2: Download BFG Repo-Cleaner

Grab the latest .jar from Maven Central:

wget https://repo1.maven.org/maven2/com/madgag/bfg/1.14.0/bfg-1.14.0.jar -O bfg.jar

You can now run it via:

java -jar bfg.jar --help

✅ Step 3: Clone Your Repository in Mirror Mode

git clone --mirror https://github.com/your-username/your-website.com.git
cd your-website.com.git

The --mirror option ensures we get all refs and branches for complete cleanup.


✅ Step 4: Define Secrets to Remove

Create a secrets.txt file with strings you want to remove from the entire history:

DB_PASSWORD
DB_USERNAME
google_recaptcha_secret
my_old_secret_key

🔐 These can be full strings or patterns. By default, BFG replaces each match with ***REMOVED***.


✅ Step 5: Run BFG

java -jar ../bfg.jar --replace-text secrets.txt

Or to delete sensitive files completely (e.g., .env):

java -jar ../bfg.jar --delete-files .env

✅ Step 6: Cleanup and Optimize

After BFG has done its job, run the following to clean and compact the repo:

git reflog expire --expire=now --all
git gc --prune=now --aggressive

✅ Step 7: Force Push the Cleaned Repo

⚠️ This rewrites history, so collaborators must re-clone the repo afterward.

git push --force

✅ Bonus: Make BFG Globally Available (Optional)

sudo mv bfg.jar /usr/local/bin/bfg.jar
echo 'alias bfg="java -jar /usr/local/bin/bfg.jar"' >> ~/.bashrc
source ~/.bashrc

Now you can run BFG anywhere with:

bfg --help

🔎 How to Verify That Secrets Are Gone

Use Git’s search to confirm:

git log -S'some_secret_string'
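
To see what -S actually finds, here’s a self-contained illustration: it builds a throwaway repo where a secret was committed and later "removed" in a follow-up commit, then searches for it (repo, names, and the secret are all invented):

```shell
# Build a throwaway repo where a secret was committed and later "removed".
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email dev@example.com
git config user.name dev

echo 'password=supersecret' > .env && git add .env && git commit -qm 'add config'
echo 'password=CHANGED' > .env && git add .env && git commit -qm 'remove secret'

# -S lists every commit that added or removed the string: both commits show up,
# so the secret is still recoverable from history despite the second commit.
git log --all --oneline -S'supersecret'
```

After a successful BFG run on a real repo, the same search should come back empty.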

🚨 Don’t Forget to Rotate Secrets

Even after removing secrets from history, treat them as compromised and rotate them immediately (update DB users, regenerate API keys, etc.).


📦 Alternatives to BFG

If you’re working on more complex history rewrites, also consider:

  • git filter-repo – the official successor to filter-branch
  • git filter-branch – powerful but slow and error-prone


🤝 Conclusion

Mistakes happen, and leaking secrets in Git is more common than you’d think. Luckily, tools like BFG make it fast and easy to clean your repo and start fresh.

If you found this helpful, share it with your team or developer community — and let’s keep our code (and secrets) safe. 🛡️

How to Upload a Local Bare Git Repository to GitHub and Make It Public


Sometimes, during your development workflow, you might have a bare Git repository stored locally — for example, as a central repository for multiple team members or part of a custom deployment process. But eventually, you might want to push this repository to GitHub, make it public, and possibly open it to contributions or make it easier to browse history online.

In this post, we’ll walk step-by-step through how to upload a local bare Git repository to GitHub, including pushing all branches and tags.


🧠 What Is a Bare Git Repository?

A bare Git repository is a repository without a working directory. It only contains the .git folder contents — that is, the Git object database, references, and configuration. You usually don’t work directly inside a bare repo (no editing files or running builds), but it’s often used for remote storage or central version control.

You can recognize it by its structure and the lack of a working tree. A working clone that uses it as a remote shows something like this:

$ git remote -v
origin  file:///home/username/your-website.git (fetch)
origin  file:///home/username/your-website.git (push)

🛠️ Step-by-Step Guide

1. Create a New Repository on GitHub

Head over to https://github.com/new and:

  • Set the repository name (e.g. your-website)
  • Choose Public
  • Do not initialize with a README, .gitignore, or license — we’ll be pushing our own history

Click Create repository to get your new GitHub repo.


2. Add GitHub as a Remote in Your Bare Repo

Navigate to the folder containing your bare repo:

cd /home/username/your-website.git

Now add the GitHub URL as a new remote (call it github):

git remote add github https://github.com/your-username/your-website.git

🔐 If you’re using SSH instead of HTTPS, you can use:

git remote add github git@github.com:your-username/your-website.git

3. Push Everything to GitHub

The safest and most complete way to upload your entire repository — including all branches, tags, and history — is using the --mirror option:

git push --mirror github

Unlike --all, which pushes only branches, --mirror pushes every ref, including tags and remote-tracking branches, making the remote an exact copy of your local repository.
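
You can watch what --mirror transfers with a purely local experiment: a tiny source repo, a bare clone of it, and a second bare repo standing in for GitHub (all paths and names below are temporary demo values):

```shell
# All paths are temporary; the second bare repo stands in for GitHub.
cd "$(mktemp -d)"

# A small source repo with one commit and a tag.
git init -q src && cd src
git config user.email dev@example.com
git config user.name dev
echo hello > index.html && git add . && git commit -qm 'first commit'
git tag v1.0
cd ..

# A bare clone plays the role of the local bare repo...
git clone -q --bare src your-website.git
# ...and an empty bare repo stands in for the GitHub remote.
git init -q --bare github-stand-in.git

cd your-website.git
git remote add github ../github-stand-in.git
git push -q --mirror github

# The stand-in now holds the full history and the tag:
git --git-dir=../github-stand-in.git log --oneline
git --git-dir=../github-stand-in.git tag
```

With a real GitHub remote, the only difference is the URL in git remote add.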


4. Verify on GitHub

Go back to your repository page on GitHub, and you should see:

  • All your commits
  • All your branches
  • Any tags you may have created

Congrats! 🎉 Your local bare repository is now on GitHub and public!


🧩 Bonus Tips

Add a README

You might want to add a README.md to introduce the purpose of the project. You can clone the repo into a working directory and commit the README from there.

git clone https://github.com/your-username/your-website.git
cd your-website
echo "# Your Website" > README.md
git add README.md
git commit -m "Add README"
git push origin main

Add a .gitignore

Add a .gitignore to clean up the repo and avoid committing unnecessary files. You can use GitHub’s gitignore templates as a base.

Setup GitHub Actions (CI/CD)

Once your project is public, GitHub Actions is a powerful tool for automated testing, deployment, or code checks. You can add a .github/workflows directory with workflow YAML files for different CI tasks.


🎯 Conclusion

Uploading a local bare Git repository to GitHub is a straightforward but powerful step — whether you’re archiving, collaborating, or going open-source. By using --mirror, you preserve every aspect of your local Git history.

Have questions about CI/CD, GitHub Pages, Laravel deployment, or PHP best practices? Feel free to connect or drop a comment!



How to Integrate GitLab Cloud with Slack for Real-Time Notifications

Integrating GitLab Cloud with Slack can significantly enhance your development workflow by providing real-time notifications about commits, merge requests, pipeline statuses, and other repository activities. In this guide, we’ll walk through the process of setting up GitLab Cloud to send messages to Slack whenever important events occur.


Why Integrate GitLab with Slack?

With GitLab-Slack integration, you can:

  • Get real-time alerts on repository activities.
  • Improve team collaboration with instant updates.
  • Monitor pipeline statuses to track CI/CD workflows.
  • Stay informed about merge requests and commits without leaving Slack.

Step-by-Step Guide to GitLab-Slack Integration

Step 1: Enable Slack Integration in GitLab Cloud

  1. Log in to your GitLab Cloud account.
  2. Navigate to the project you want to integrate.
  3. Go to Settings → Integrations.
  4. Scroll down and find Slack Notifications.

Step 2: Generate a Slack Webhook URL

To allow GitLab to send messages to Slack, you need to set up a webhook:

  1. Open Slack and go to your workspace.
  2. Click on your workspace name (top left corner) → Settings & Administration → Manage Apps.
  3. Search for "Incoming WebHooks" and select Add to Slack.
  4. Choose a Slack channel where GitLab notifications should appear (e.g., #git-updates).
  5. Click Add Incoming WebHooks integration.
  6. Copy the Webhook URL that Slack generates.

Step 3: Configure GitLab to Use the Webhook

  1. Return to the Slack Notifications settings in GitLab.
  2. Paste the Webhook URL into the provided field.
  3. Choose which events should trigger Slack notifications:
    • Push events (code commits)
    • Issue events (new issues, updates, or closures)
    • Merge request events (approvals, rejections, and updates)
    • Pipeline events (CI/CD status updates)
    • Tag push events (new releases or versions)
    • Wiki page events (if using GitLab Wiki)
  4. Click Save Changes.

Step 4: Customize Notifications (Optional)

If you need more control over what gets sent to Slack, consider these options:

  • Modify the Slack Webhook settings in GitLab.
  • Use Slack slash commands (e.g., /gitlab subscribe) to manage notifications.
  • Set up Slack workflows to format and filter messages for better clarity.

Testing the Integration

Once the setup is complete, test the integration by performing one of the following actions:

  • Commit a change to your GitLab repository.
  • Create a merge request.
  • Run a pipeline.

If everything is configured correctly, you should see a message in your Slack channel confirming the event.
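
You can also poke the webhook directly from a terminal. The sketch below only prints the curl command — the URL is a placeholder pattern, so substitute the one Slack generated in Step 2 and remove the echo to actually send:

```shell
# Placeholder URL - substitute the real webhook URL from Step 2.
WEBHOOK_URL='https://hooks.slack.com/services/T0000000/B0000000/XXXXXXXXXXXXXXXXXXXXXXXX'
PAYLOAD='{"text":"GitLab integration smoke test"}'

# Dry run: remove the leading "echo" to actually post the message.
echo curl -X POST -H 'Content-type: application/json' --data "$PAYLOAD" "$WEBHOOK_URL"
```

If the real request succeeds, Slack responds with a plain "ok" and the message appears in your channel.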


Final Thoughts

Integrating GitLab with Slack streamlines communication and ensures that your team stays up-to-date on project progress. By following these steps, you can optimize your workflow and enhance team collaboration with real-time GitLab notifications in Slack.

🚀 Now it’s your turn! Try this setup and let us know how it improves your development workflow!


If you found this guide helpful, feel free to share it with your developer community! 🔥

Measuring HTTP Request Time with cURL in Linux


When testing web application performance, one of the most useful tools at your disposal is curl. This command-line tool allows developers to measure request times, analyze response latency, and debug performance bottlenecks efficiently.

In this post, we’ll explore how you can use curl to measure HTTP request time, break down various timing metrics, and optimize your API calls for better performance.

Basic Usage: Measure Total Request Time

If you simply want to check how long a request takes from start to finish, use:

curl -o /dev/null -s -w "Time taken: %{time_total}s\n" https://example.com

Explanation:

  • -o /dev/null: Prevents output from being printed to the terminal.
  • -s: Runs in silent mode, hiding progress details.
  • -w "Time taken: %{time_total}s\n": Displays the total request time.

Detailed Timing Breakdown

If you’re debugging slow requests, you may want to break down the request into different phases:

curl -o /dev/null -s -w "Time Lookup: %{time_namelookup}s\nTime Connect: %{time_connect}s\nTime StartTransfer: %{time_starttransfer}s\nTotal Time: %{time_total}s\n" https://example.com

Key Metrics:

  • time_namelookup: Time taken to resolve the domain name.
  • time_connect: Time taken to establish a TCP connection.
  • time_starttransfer: Time until the server starts sending data.
  • time_total: Total time taken for the request.

Saving Results to a Log File

To store request timing data for analysis, append output to a log file:

curl -o /dev/null -s -w "%{time_total}\n" https://example.com >> perf_log.txt
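
Once the log has a few entries, awk can summarize it. The timings below are invented sample values standing in for real measurements:

```shell
# Scratch directory; the timings are invented stand-ins for real measurements.
cd "$(mktemp -d)"
printf '0.214\n0.189\n0.202\n' > perf_log.txt

# Count, average, min, and max across all logged requests.
awk 'NR == 1 { min = max = $1 }
     { sum += $1; if ($1 > max) max = $1; if ($1 < min) min = $1 }
     END { printf "n=%d avg=%.3fs min=%.3fs max=%.3fs\n", NR, sum/NR, min, max }' perf_log.txt
# -> n=3 avg=0.202s min=0.189s max=0.214s
```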

Automating Multiple Requests

If you want to test multiple requests and analyze response times:

for i in {1..5}; do curl -o /dev/null -s -w "Request $i: %{time_total}s\n" https://example.com; done

This will send five requests and print the total time for each.

Comparing HTTP vs. HTTPS Performance

To compare response times for an API running over HTTP and HTTPS:

curl -o /dev/null -s -w "HTTP Time: %{time_total}s\n" http://example.com
curl -o /dev/null -s -w "HTTPS Time: %{time_total}s\n" https://example.com

You might notice HTTPS takes slightly longer due to encryption overhead.

Using cURL with Proxy for Network Debugging

If you’re testing behind a proxy, you can measure request times using:

curl -x http://proxy.example.com:8080 -o /dev/null -s -w "Total Time: %{time_total}s\n" https://example.com

Final Thoughts

Understanding HTTP request timing is crucial for optimizing API response times and diagnosing performance bottlenecks. By leveraging curl’s timing metrics, developers can effectively analyze and improve web application performance.

Do you use curl for performance testing? Share your experiences in the comments below!

Ignoring Local Changes to Files in Git: The Power of --assume-unchanged

Introduction

As developers, we often work on projects where certain files, like composer.lock, are frequently updated locally but should not be committed to the repository. However, adding these files to .gitignore might not be the best approach, especially if they need to be tracked in the repository but ignored temporarily.

This is where Git’s --assume-unchanged flag comes in handy! In this blog post, we’ll explore what it does, when to use it, and how to revert the changes when needed.

What Does git update-index --assume-unchanged Do?

The command:

 git update-index --assume-unchanged composer.lock

Explanation:

  • git update-index is a low-level Git command that modifies the index (staging area).
  • --assume-unchanged tells Git to mark the file as "unchanged" in the working directory.

Effect of Running This Command:

  • Git stops tracking modifications to the specified file (composer.lock in this case).
  • If you edit composer.lock, Git won’t detect the changes and won’t include them in future commits.
  • The file remains in the repository, but any local modifications stay untracked.

When Should You Use --assume-unchanged?

This feature is useful in the following scenarios:

  • You have local environment-specific changes in a file (like composer.lock or .env) that you don’t want to commit but also don’t want to ignore permanently.
  • You are working on a project where certain configuration files keep changing, but you don’t want those changes to show up in git status every time.
  • You need a temporary workaround instead of modifying .gitignore or creating a separate local branch.

How to Check If a File Is Marked as --assume-unchanged?

To check whether a file has been marked with --assume-unchanged, use:

 git ls-files -v | grep '^h'

Files marked with an h are assumed to be unchanged.
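
Here’s the full round trip in a disposable repo (file contents invented for the demo):

```shell
# Throwaway repo with a tracked composer.lock.
cd "$(mktemp -d)" && git init -q
git config user.email dev@example.com
git config user.name dev
echo '{}' > composer.lock && git add composer.lock && git commit -qm 'add lock'

git update-index --assume-unchanged composer.lock
echo 'local tweak' >> composer.lock

git status --short              # prints nothing: the edit is invisible to Git
git ls-files -v composer.lock   # -> h composer.lock
```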

Reverting the --assume-unchanged Status

If you later decide that you want Git to track changes to the file again, use:

 git update-index --no-assume-unchanged composer.lock

This command removes the "assume unchanged" flag, allowing Git to detect modifications as usual.

Important Considerations

  • This does not remove the file from version control; it only affects local modifications.
  • Other developers won’t be affected by this command—it’s purely a local setting.
  • If you pull new changes from a remote repository that modify the file, you may experience conflicts.

Alternative Approach: .git/info/exclude

If you want to ignore a file only for yourself, without modifying .gitignore, you can add it to .git/info/exclude:

 echo 'composer.lock' >> .git/info/exclude

This works similarly to .gitignore but applies only to your local repository.

Conclusion

Using git update-index --assume-unchanged is a great way to temporarily ignore changes to tracked files in Git. It’s particularly useful for developers working on projects where some files change frequently but shouldn’t be committed every time.

Next time you’re tired of seeing local changes cluttering your git status, try this command and make your workflow cleaner!

Have You Used This Command Before?

If you have any tips or experiences using --assume-unchanged, share them in the comments! 🚀