I make extensive use of Laravel Debugbar to track the performance of parts of my application. I sprinkle calls to Debugbar::startMeasure and Debugbar::stopMeasure throughout my code to track the duration of certain segments. However, when this code goes into production, the dependency isn't present, which causes the code to break since it can no longer find Debugbar.

To solve this issue, I thought I would create a dummy Debugbar class and register it as an alias, so that any code depending on Debugbar would still work but end up as a "no operation". I found the article Dynamic class aliases in package, which provided the necessary piece of information to accomplish this.

<?php

use Illuminate\Foundation\AliasLoader;
use My\SuperPackage\FooBar;

class ServiceProvider extends \Illuminate\Support\ServiceProvider
{
    public function register()
    {
        $this->app->booting(function() {
            // Register the alias once the application begins booting
            $loader = AliasLoader::getInstance();
            $loader->alias('FooBar', FooBar::class);
        });
    }
}

For my use case, I simply implemented the following two changes:

In app/Providers/DebugbarServiceProvider.php (a new file)

<?php

namespace App\Providers;

use Illuminate\Foundation\AliasLoader;
use Illuminate\Support\ServiceProvider;

class DebugbarServiceProvider extends ServiceProvider
{
    public function register()
    {
        // Only register the no-op alias when the real Debugbar is not installed
        // (e.g. in production, where the dev-only package is absent)
        if (!class_exists('Debugbar')) {
            $loader = AliasLoader::getInstance();
            $loader->alias('Debugbar', NullDebugbar::class);
        }
    }
}

class NullDebugbar
{
    public static function __callStatic(string $name, array $arguments)
    {
        // Silently swallow any static call (startMeasure, stopMeasure, etc.)
    }
}

In config/app.php

    // under the 'providers' key, add
    'providers' => [
        [...]
        // This will take care of loading the service provider defined above
        App\Providers\DebugbarServiceProvider::class,
    ],

With those two changes, it is now possible to make use of Debugbar in most places and have it work even without the Laravel Debugbar dependency installed.
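
As a quick usage sketch, the same instrumentation now works in both environments; buildReport() below is a made-up function standing in for the code being measured:

<?php

// In development these calls reach the real Debugbar; in production the
// 'Debugbar' alias points at NullDebugbar and every call is a no-op.
Debugbar::startMeasure('render-report', 'Rendering the report');

$report = buildReport(); // hypothetical function standing in for the measured code

Debugbar::stopMeasure('render-report');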

01 Mar 2015

PHP Startup Internals

The stack I'll be working with:

  • PHP
  • MySQL
  • Jenkins
  • Apache/Nginx
  • Linux (Ubuntu)
  • node.js/io.js

My goal with this post (and any subsequent posts) is to share my thoughts and current practices on the topic of developing PHP applications in a startup environment.

Starting a new startup means making decisions: which framework to choose, which tools to use, which programming language, which task should be done before which other, and so on.

Starting is often overwhelming. What should be done first? If we ignore all the questions about the business (what sector? any specific niche? what sort of product?), then the first thing that an individual or a team should aim for is to prepare for iteration.

Many would start by working directly on their first project. It makes sense, since producing results is the primary goal of your startup. However, writing code without establishing some sort of workflow framework will be inefficient.

My first step is generally to set up Jenkins, a continuous integration tool. It allows me to set up automated testing and automated deployment to a development/staging environment. This is useful for two purposes:

  1. Having an external "party" execute the tests in its own environment (separate from mine). This validates that whatever is in source control will work on someone else's computer.
  2. It automatically deploys "stable" versions (in the sense that they pass the tests) to an online-facing server. With automated deployment, I can keep writing code, have it tested and then deployed to a server where I can ask others to take a look and provide feedback.

There are a couple of ways to get set up.

In this setup, everything lives on the same machine. Here is how it basically goes:

  1. Install Jenkins
  2. Create two Jenkins jobs: project-name-develop, which builds the develop branch of your repository and runs the tests (basic continuous integration), and project-name-develop-to-development, which again builds the develop branch, but this time for the purpose of making it available online.

There won't be much to discuss here except a list of plugins that are almost mandatory (either because they make Jenkins much more useful or because they help you diagnose issues more quickly).

  • AnsiColor
  • Checkstyle Plug-in
  • Clover PHP plugin
  • Credentials Plugin
  • Duplicate Code Scanner Plug-in
  • GIT client plugin
  • GIT plugin
  • HTML Publisher plugin
  • JDepend Plugin
  • JUnit Plugin
  • Mailer Plugin
  • Matrix Authorization Strategy Plugin
  • Matrix Project Plugin
  • Node and Label parameter plugin
  • Parameterized Trigger plugin
  • Plot plugin
  • PMD Plug-in
  • Self-Organizing Swarm Plug-in Modules
  • Slack Notification Plugin
  • SSH Credentials Plugin
  • SSH Slaves plugin
  • Static Analysis Utilities
  • Throttle Concurrent Builds Plug-in
  • Timestamper
  • Violations plugin
  • xUnit plugin

I'll now go into more detail about what each job does. The project-name-develop job runs the following steps:

  1. Pull the latest revision from the repository
  2. Download and update composer (if required)
  3. Install dependencies
    1. bower install
    2. npm install
    3. composer install
  4. Build assets to validate they compile
    1. Compile LESS into CSS
    2. Concatenate and minify JS
  5. Prepare the application environment
    1. Migrate database
    2. Seed database
  6. Run continuous integration tools to assert code quality
    1. phpunit
    2. phploc
    3. pdepend
    4. phpmd
    5. phpcs
    6. phpcpd

An iterative cycle here should take less than 5 minutes (and a maximum of 30 minutes). The goal is to quickly know after pushing changes to your repository that nothing is broken.
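
As a rough illustration, here is a minimal sketch of a build runner such a job could call. The tool invocations, paths and the idea of driving them from a PHP script are assumptions for illustration; most setups would use shell build steps instead:

<?php

// Hypothetical build runner for the project-name-develop job; tool names,
// arguments and paths are assumptions and will differ per project.
$steps = [
    'composer install --no-interaction',
    'vendor/bin/phpunit',
    'vendor/bin/phploc app',
    'vendor/bin/phpcs app',
    'vendor/bin/phpcpd app',
];

foreach ($steps as $step) {
    passthru($step, $exitCode);

    // Any non-zero exit code stops the run and marks the Jenkins build as failed.
    if ($exitCode !== 0) {
        exit($exitCode);
    }
}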

The project-name-develop-to-development job makes the build available online. For this to work, you simply need to make a symbolic link from the Jenkins project workspace to a path that Apache/Nginx serves to external users. For example

/home/jenkins/workspace/project-a-develop-to-development/public -> /var/www/development/project-a

The job itself runs the following steps:

  1. Pull the latest revision from the repository
  2. Download and update composer (if required)
  3. Install dependencies
    1. bower install
    2. npm install
    3. composer install
  4. Build/Prepare website
    1. Compile LESS into CSS
    2. Concatenate and minify JS
  5. Prepare the application environment
    1. Migrate database
    2. Seed database

An iterative cycle here should take less than 5 minutes. Anything that takes longer than that would be suspicious.

Now that you have both projects set up, here's how things work. First, project-name-develop polls the repository for changes every 1-5 minutes. If changes are detected, a build starts and verifies that the current state of the code is valid.

Once the build finishes, if it is successful, project-name-develop-to-development starts (it is triggered on project-name-develop success) and deploys the stable code so that users may test it.

A whole change cycle will generally take from 1 to 30 minutes, depending on how many tests you have and how well you've been able to optimize your Jenkins build workflow.

Here's a list of things to try/check:

  • If you are running phpunit with code coverage, disable it and run it in a separate Jenkins project. Running the tests with code coverage enabled is 2-5x slower than without it. When you run the tests, you want to know the results fast; code coverage should not be the priority, speed is.
  • If you are running tests against a database and the tests require setting up and tearing down the database (either just truncating the tables or dropping them entirely), look for ways to avoid hitting the database or to improve performance. For example, if you are testing against SQLite, run an initial database migration and seeding once and keep the resulting .sqlite file, so that it can be copied into place on test setup instead of migrating/seeding every time (see the sketch after this list).
  • If migrating/seeding takes a long time, keep the resulting .sqlite file and only rebuild it if its source files (dependencies) have changed. On a project, you will run tests much more often than you will rebuild the .sqlite file, so it is worth investing in developing such a tool.
  • Since PHP is single-threaded, look for tools that enable multi-process PHP testing. An example of such a tool is liuggio/fastest. Depending on the number of processors/cores available, you could see a 4-8x speedup.
  • If you have the money/hardware, distribute testing over many machines. If you want unified phpunit code coverage/results, you can use phpcov to merge the separate test results into a single report.
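
As an illustration of the .sqlite copy trick mentioned in the list above, here is a minimal sketch of a test base class. The paths are made up, and the base class and setUp() signature assume a recent PHPUnit; adapt them to your version:

<?php

use PHPUnit\Framework\TestCase;

class DatabaseTestCase extends TestCase
{
    protected function setUp(): void
    {
        // Copy a pre-built, already migrated and seeded database into place
        // instead of running the migrations and seeders for every single test.
        copy(__DIR__ . '/fixtures/seeded.sqlite', __DIR__ . '/../storage/testing.sqlite');

        parent::setUp();
    }
}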

I was trying to get PHPUnit to give me a coverage report for a project I had not worked on in a long time. I had received a GitHub pull request and wanted to see what the coverage on the project was, to check whether the submitter had done a good job of covering his code, but I couldn't get PHPUnit to generate a report that contained any data. All I would get was a couple of empty directory listing pages, which was useless.

I had code coverage working on another project in the same environment, so I was pretty sure my problem had to do either with how I had set up PHPUnit for that particular project, or with something else interfering with the report generation.

I tried a couple of things, starting by calling phpunit from the command line using different arguments:


phpunit --coverage-html report test\symbol_test.php

Would generate some report with data in it, good!


phpunit -c test\phpunit.xml (logging set in phpunit.xml)

Would generate an empty report, not good...


phpunit --coverage-html report -c test\phpunit.xml

Would generate an empty report, not good...

So at that point I could see that PHPUnit was able to generate coverage correctly and that something was definitely wrong with my phpunit.xml configuration file. I went back to the phpunit.de manual, specifically the configuration page, and tried to figure out my problem.

For code coverage to be included in your report, you have to add a filter, be it a blacklist or a whitelist; without one, the report comes out empty.

So I quickly added a filter such as


<filter>
    <whitelist>
        <directory>../</directory>
    </whitelist>
</filter>

which whitelists everything in the project (my project root is one level above test). I ran phpunit from the test folder and finally got a report with data!

A friend of mine was trying to install PHPUnit on his Mac (OS X Lion), but unfortunately he got stuck during the process.

At some point, he was faced with an error complaining about a file. We both looked at the problem and first made sure that said file existed. It did, which was weird...

A bit of googling reveals that this is frequently fixed by appending the path to the PEAR folder to your PHP include_path (defined in your php.ini). But in his case that was already done, and it still wasn't working.

Next up was to check for a permissions problem. Not having read permission on the file would have been an easy one to fix, but again, this was not the cause of the problem.

At this point you might be asking yourself: the file exists and you have permission to read it, so why can't it be read?

Well, the actual problem was that his OS had maxfiles set to 256, and PHPUnit had already reached that number of open files.

To check if you have the same problem, try running phpunit through sudo dtruss -f -t open phpunit (Linux users will want to use strace -e open phpunit). In the output, you should see the files being opened. At some point, you'll find Err#24, which indicates "Too many open files". If you have this problem, the following should help you fix it.
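
You can also observe the limit from PHP itself. The following is a minimal sketch (plain PHP, nothing PHPUnit-specific) that keeps opening handles until the operating system refuses:

<?php

// Keep opening read handles to this very file until fopen() fails, which
// happens once the per-process file descriptor limit has been reached.
$handles = [];
while (($handle = @fopen(__FILE__, 'r')) !== false) {
    $handles[] = $handle;
}

echo 'Opened ' . count($handles) . " handles before hitting the limit\n";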

The solution to this problem is quite easy: increase the maximum number of file descriptors a process may use. You can do it temporarily with ulimit -S -n 1024 (to allow 1024 instead of 256). Another way, which keeps the setting until you change it again, is to use launchctl limit maxfiles 1024 unlimited.

I've been using PHPUnit recently to test a Kohana application I'm developing as my last semester project for my bachelor's degree.

At some point during the development, code coverage generation decided to stop working on my desktop (my remote CI still had no problem).

I started diagnosing the problem. Being on Windows, I thought it could be due to the "poor job" I had done installing PHP, PEAR and PHPUnit. I didn't want to go through the trouble of reinstalling everything though, so I just did the minimum: uninstall and reinstall PHPUnit. No success at that point.

I decided to go back a week or two in my SVN revisions, have it generate code coverage, and then move forward to the point where code coverage generation would fail. It took around two SVN "update to revision" operations to get there. After that, I tried updating the tests, but the new tests were using new features, so I updated the code first, then started updating the test files one by one. After a couple of files, I hit an interesting message:

ErrorException [ 1 ]: Allowed memory size of 134217728 bytes exhausted (tried to allocate 543278 bytes) ~ C:\php\pear\Text\Template.php [ 134 ]

I never had that message show up before, which is kind of strange. I would have expected PHP to tell me the same thing every time it tried to generate the coverage report but couldn't...

So the quick fix was to edit my php.ini and set memory_limit = 256M instead of 128M.
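
If you would rather not raise the limit globally in php.ini, a minimal alternative sketch (assuming your test suite loads some bootstrap file you control) is to raise it from PHP itself:

<?php

// Raise the memory ceiling for this process only, e.g. from a test bootstrap,
// instead of changing memory_limit globally in php.ini.
ini_set('memory_limit', '256M');

// Optionally report how much memory the run actually needed.
register_shutdown_function(function () {
    echo 'Peak memory: ' . round(memory_get_peak_usage(true) / 1048576) . " MB\n";
});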