Last week, I did some system updates, and then decided to compile the most recent PHP releases. I've used phpbrew to manage multiple PHP releases for a number of years, and having it install a new version is fairly routine.

Except this time, it wasn't. Due to the updates I had installed, I ran into errors compiling first the GD extension, and then ext-intl:

  • If you want Freetype support in ext-gd, you are expected to install the package libfreetype-dev. On Ubuntu, this now installs libfreetype6-dev, which no longer includes the freetype-config binary that PHP's configure script uses to determine what features it supports.

  • Similarly, ext-intl depends on the package libicu-dev. Ubuntu's package now omits the icu-config binary used by PHP to determine feature support.

I searched for quite some time to find packages that would resolve these problems. I could have grabbed the source for each library, compiled it, and linked against that, but that would have meant keeping those builds up-to-date on top of my PHP installs.

I even looked in the ondrej/php PPA, as that repository has multiple PHP versions already, including source packages.

And then I thought: why not try using those instead of phpbrew?

The rest of this post is how I made that work.

I use Ubuntu for my operating system. The instructions I present here should work on any Debian-based system, but your mileage may vary. If you are using an RPM-based system, yum will be your friend, but I have no idea how to add repositories in that system, nor if update-alternatives is available. As such, these instructions may or may not help you.

Which is okay. I mainly wrote them to help future me.

Register the PPA

First, I had to add the PPA to the system:

$ sudo add-apt-repository ppa:ondrej/ppa

Then the usual:

$ sudo apt update

Approach to installation

I first identified the extensions I typically install, matched them to packages in the PPA, and made a list. I knew I'd be installing the same extensions across all PHP versions I wanted, so I figured I could script it a bit.

From there, I executed the following from the CLI:

$ for VERSION in 5.6 7.0 7.1 7.2 7.3;do
for> for EXTENSION in {listed all extensions here};do
for for> sudo apt install php${VERSION}-${EXTENSION}
for for> done
for> done

This grabbed and installed each PHP I needed along with all extensions I wanted.

Switching between versions

To switch between versions, you have two options:

  • Use the version-specific binaries: /usr/bin/php5.6, /usr/bin/php7.0, etc.

  • Set a default via update-alternatives:

    $ sudo update-alternatives --set php /usr/bin/php5.6
    

    If you're not sure what you have installed, use:

    $ sudo update-alternatives --config php
    

    which will give you a listing and the ability to select the version to use.

Rootless alternatives

What if you'd rather not be root to switch the default version, though? Fortunately, update-alternatives allows specifying alternate config and admin directories.

Define the following alias in your shell's configuration:

alias update-my-alternatives='update-alternatives \
 --altdir ~/.local/etc/alternatives \
 --admindir ~/.local/var/lib/alternatives'

Additionally, make sure you add $HOME/.local/bin to your $PATH; since defining $PATH varies based on the shell you use, I'll leave that for you to accomplish.
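
In bash or zsh, for example, a line like the following in your ~/.bashrc or ~/.zshrc would accomplish this (adjust for your own shell):

export PATH="$HOME/.local/bin:$PATH"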

If you open a new shell, the alias will now be available; alternately, source the file in which you defined it to have it take effect immediately.

Once you've done that, you can run the following, based on the PHP versions you've installed:

$ for VERSION in 5.6 7.0 7.1 7.2 7.3;do
for> update-my-alternatives --install $HOME/.local/bin/php php /usr/bin/php${VERSION} ${VERSION//./0}
for> done

This will create alternatives entries local to your own user, prioritized by version; the ${VERSION//./0} expansion replaces the dot with a zero to generate each priority (e.g., 7.3 becomes 703), so the default, auto-selected version will be the highest version installed.

You can verify this by running update-my-alternatives --config php:

There are 5 choices for the alternative php (providing $HOME/.local/bin/php).

  Selection    Path             Priority   Status
------------------------------------------------------------
* 0            /usr/bin/php7.3   703       auto mode
  1            /usr/bin/php5.6   506       manual mode
  2            /usr/bin/php7.0   700       manual mode
  3            /usr/bin/php7.1   701       manual mode
  4            /usr/bin/php7.2   702       manual mode
  5            /usr/bin/php7.3   703       manual mode

Press <enter> to keep the current choice[*], or type selection number:

To switch between versions using the alias:

  • Switch to a specific, known version:

    $ update-my-alternatives --set php /usr/bin/php{VERSION}
    
  • Switch back to the default version (version with highest priority):

    $ update-my-alternatives --auto php
    
  • List available versions:

    $ update-my-alternatives --list php
    
  • Interactively choose a version when you're not sure what's available:

    $ update-my-alternatives --config php
    

The above was cobbled together from:

  • https://serverfault.com/a/811377
  • https://williamdemeo.github.io/linux/update-alternatives.html

PECL

Compiling and installing your own extensions turns out to be a bit of a pain when you have multiple PHP versions installed, mainly because there is exactly one PECL binary installed.

First, you need to install a few packages, including the one containing PEAR (PECL uses the PEAR installer), and the development packages for each PHP version you use (as those contain the tools necessary to compile extensions, including phpize and php-config):

$ sudo apt install php-pear
$ for VERSION in 5.6 7.0 7.1 7.2 7.3;do
for> sudo apt install php${VERSION}-dev
for> done

Compiling extensions

From there, you need to:

  1. Ensure the correct phpize and php-config are selected.
  2. Install the extension.
  3. Tell PECL to deregister the extension in its own registry.

Normally, you would accomplish the first point by doing the following:

$ sudo update-alternatives --set php /usr/bin/php7.3
$ sudo update-alternatives --set php-config /usr/bin/php-config7.3
$ sudo update-alternatives --set phpize /usr/bin/phpize7.3

Note that the above is not using the update-my-alternatives alias detailed in the previous section. This is because extensions must be installed at the system level.

That said, the above won't be necessary, as I detail below.

This is because PECL now has a really nice configuration flag, php_suffix, that allows specifying a string to append to each of the php, phpize, and php-config binary names. So, for example, if I specify pecl -d php_suffix=7.3, the string 7.3 will be appended to those names, so that they become php7.3, phpize7.3, and php-config7.3, respectively. This ensures that the correct scripts are called during the build process, and that the extension is installed to the correct location.

As for the last point in that numbered list, it's key to being able to install an extension in multiple PHP versions; otherwise, each subsequent attempt, even when using a different PHP version, will result in PECL indicating it's already installed. The -r switch tells PECL to remove the package from its own registry, but not to remove any build artifacts.

As a complete example:

$ sudo pecl -d php_suffix=7.3 install swoole && sudo pecl uninstall -r swoole

Registering extensions

From there, you still have to register, and optionally configure, the extension. To do this, drop a file named after the extension in /etc/php/${PHP_VERSION}/mods-available/${EXTENSION}.ini, with the following contents:

; configuration for php ${EXTENSION} module
; priority=20
extension=${EXTENSION}.so

Now that this is in place, enable it:

$ sudo phpenmod -v ${PHP_VERSION} -s cli ${EXTENSION}

(To disable it, use phpdismod instead.)
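
As a concrete end-to-end example, assuming the swoole extension compiled for PHP 7.3 earlier:

$ printf '; configuration for php swoole module\n; priority=20\nextension=swoole.so\n' \
    | sudo tee /etc/php/7.3/mods-available/swoole.ini
$ sudo phpenmod -v 7.3 -s cli swoole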

Thoughts

When thinking in terms of phpbrew:

  • Like phpbrew, you can temporarily choose an alternative PHP binary by simply referring to its path. With phpbrew, this would be something like ~/.phpbrew/php/php-{PHP_VERSION}/bin/php. With the ondrej PPA, it becomes /usr/bin/php{PHP_MINOR_VERSION}. (E.g. ~/.phpbrew/php/php-7.2.36/bin/php would become just /usr/bin/php7.2.)

  • There is no equivalent to phpbrew use. That feature would change the symlink only for the duration of the current terminal session; opening a new terminal session would revert to the previous selection. With update-alternatives, it's all or nothing. I mainly used phpbrew use to ensure my default PHP did not change if I forgot to switch back.

  • Usage of update-alternatives is more like phpbrew switch, as it affects both the current and all later terminal sessions. Once switched, that selection is in use until you switch it again. This means I have to remember to switch to my default version. However, it's relatively easy to add a line to my shell profile to call update-my-alternatives --auto php.
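
Since aliases are typically not yet defined when login scripts run, that profile line should use the expanded command:

update-alternatives \
 --altdir ~/.local/etc/alternatives \
 --admindir ~/.local/var/lib/alternatives \
 --auto php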

Basically, if you can invoke a binary directly but don't want it as your default, refer to it by absolute path. If you are running a command that will use the current environment's PHP, use update-my-alternatives to switch PHP versions first.

The other issue I see is that if I want to test against a specific PHP version, I'll still need to compile it myself — which leads me back to the original problem that led me here in the first place. I'll cross that bridge when I get to it. Until then, I have a workable solution — and finally a single document I can refer to when I need to remember again at a later date, instead of cobbling it together from multiple sources!


Ten years ago this month, I was involved in a couple of huge changes for Zend Framework.

First, I helped spearhead integration of the JavaScript library Dojo Toolkit into Zend Framework, and finalized the work that month. I'd worked closely with the two developers who had been leading that project at the time, and one thing that came up during our discussions was that they had helped create an open source foundation for the project, to ensure its continuity and longevity, and to ensure the project could outlive the ups and downs of any commercial company. This idea intrigued me, and it has stuck in the back of my mind ever since.

The other thing that happened that month was that I was promoted to Project Lead of Zend Framework. I've held that position ever since.

Today, I get to announce another change: Zend Framework is transitioning to an open source project under the Linux Foundation!

How Zend Framework Becomes Laminas

As I noted, I've been thinking about this for 10 years now, and actually doing research and trying to figure out a way to make it happen for almost two years. When my employer announced some restructuring of the Zend portfolio last fall, moving the project to a foundation was foremost on my mind.

So, imagine my surprise when an old PHP friend, John Mertic, reached out to me and offered the assistance of the Linux Foundation!

I had no idea that this was even a possibility. But, as it turns out, the mission of the foundation is to help create sustainable open source communities, and the primary way they do that is to help create foundations for projects. The beauty is that they take care of the business stuff that developers like myself don't have expertise in: legal issues, taxes, bookkeeping, and even help with things like marketing. They do this so that those of us working on open source projects can focus on our communities and our code. It's an absolutely perfect scenario for the project.

Over the last few months, I've worked with Rogue Wave (my employer) and the Linux Foundation to work out the logistics of this transition, including coming up with some initial budgets, helping flesh out a governance model, and identifying potential founding members. I've also worked with the Zend Framework community review team to come up with a name for the project, and work out some of the technical details for migrating both the project and its users.

So, please say a warm hello to the Laminas Project.

What we announced today is just a beginning. The project is not yet operational. We're still working on tooling for migrating the project and its users, and, more importantly, recruiting more founding members. If your company is interested, please fill out our form, and we'll get back to you to discuss the details.

Acknowledgments

As a parting note, I need to acknowledge a number of people who helped me through the last few months:

  • My wife, Jen, who has been my chief sounding board, and helped me keep my sanity while I juggle meetings, emails, and growing task lists. Love you!
  • Enrico Zimuel, who has been my co-worker, confidante, and friend for years, and continued to help even when he left Rogue Wave last month. I'm excited for this new chapter!
  • The various folks in the Zend Professional Services team, for letting me bounce ideas off of them and occasionally voice my frustrations. You all know who you are!
  • The entire Zend Framework community review team: Rob, Gary, Marco, Frank, James, Evan, Adam, Aleksei, Andreas, Ben, Geert, Ryan, Michał, Michael, and Mike (yes, those last three are all different people!). In particular, Michał has been putting in crazy hours working on migration tooling, and Frank just this morning sent over changes for the Laminas website for me to incorporate!
  • John Mertic and Michael Dolan of the Linux Foundation for holding my hand through this entire process and helping make it happen. We still have work to do, but even getting this far feels like a huge accomplishment, and I couldn't have done it without your support.

If all goes to plan, I'm hoping we'll be announcing the project is operational in the next few months; keep an eye on the ZF and Laminas websites and twitter handles for updates!


I am a long-time gnome-shell user. I appreciate the simplicity and elegance it provides, as I prefer having a minimalist environment that still provides me easy access to the applications I use.

That said, just as with any desktop environment, I've still run into problems now and again. One that's been plaguing me since at least the 18.04 release is with display of app indicators, specifically those using legacy system tray APIs.

Normally, gnome-shell ignores these, which is suboptimal as a number of popular programs still use them (including Dropbox, Nextcloud, Keybase, Shutter, and many others). To integrate them into Gnome, Ubuntu provides the gnome-shell extension "kstatusnotifieritem/appindicator support" (via the package gnome-shell-extension-appindicator). When enabled, they show up in your gnome-shell panel. Problem solved!

Except that if you suspend your system or lock your screen, they disappear when you wake it up.

Now, you can get them back by hitting Alt-F2, and entering r (for "restart") at the prompt. But having to do that after every time you suspend or lock is tedious.

Fortunately, I recently came across this gem:

$ sudo apt purge indicator-common

This removes some packages specific to Ubuntu's legacy Unity interface that interfere with how appindicators are propagated to the desktop. Once I did this, my appindicators persisted after all suspend/lock operations!


In Expressive, we have standardized on a file named config/routes.php to contain all your route registrations. A typical file might look something like this:

declare(strict_types=1);

use Zend\Expressive\Csrf\CsrfMiddleware;
use Zend\Expressive\Session\SessionMiddleware;

return function (
    \Zend\Expressive\Application $app,
    \Zend\Expressive\MiddlewareFactory $factory,
    \Psr\Container\ContainerInterface $container
) : void {
    $app->get('/', App\HomePageHandler::class, 'home');

    $app->get('/contact', [
        SessionMiddleware::class,
        CsrfMiddleware::class,
        App\Contact\ContactPageHandler::class
    ], 'contact');
    $app->post('/contact', [
        SessionMiddleware::class,
        CsrfMiddleware::class,
        App\Contact\ProcessContactRequestHandler::class
    ]);
    $app->get(
        '/contact/thank-you',
        App\Contact\ThankYouHandler::class,
        'contact.done'
    );

    $app->get(
        '/blog[/]',
        App\Blog\Handler\LandingPageHandler::class,
        'blog'
    );
    $app->get('/blog/{id:[^/]+\.html}', [
        SessionMiddleware::class,
        CsrfMiddleware::class,
        App\Blog\Handler\BlogPostHandler::class,
    ], 'blog.post');
    $app->post('/blog/comment/{id:[^/]+\.html}', [
        SessionMiddleware::class,
        CsrfMiddleware::class,
        App\Blog\Handler\ProcessBlogCommentHandler::class,
    ], 'blog.comment');
};

and so on.

These files can get really long, and organizing them becomes imperative.

Using Delegator Factories

One way we have recommended to make these files simpler is to use delegator factories registered with the Zend\Expressive\Application class to add routes. That looks something like this:

namespace App\Blog;

use Psr\Container\ContainerInterface;
use Zend\Expressive\Application;
use Zend\Expressive\Csrf\CsrfMiddleware;
use Zend\Expressive\Session\SessionMiddleware;

class RoutesDelegator
{
    public function __invoke(
        ContainerInterface $container,
        string $serviceName,
        callable $callback
    ) : Application {
        /** @var Application $app */
        $app = $callback();

        $app->get(
            '/blog[/]',
            Handler\LandingPageHandler::class,
            'blog'
        );
        $app->get('/blog/{id:[^/]+\.html}', [
            SessionMiddleware::class,
            CsrfMiddleware::class,
            Handler\BlogPostHandler::class,
        ], 'blog.post');
        $app->post('/blog/comment/{id:[^/]+\.html}', [
            SessionMiddleware::class,
            CsrfMiddleware::class,
            Handler\ProcessBlogCommentHandler::class,
        ], 'blog.comment');

        return $app;
    }
}

You would then register this as a delegator factory somewhere in your configuration:

use App\Blog\RoutesDelegator;
use Zend\Expressive\Application;

return [
    'dependencies' => [
        'delegators' => [
            Application::class => [
                RoutesDelegator::class,
            ],
        ],
    ],
];

Delegator factories run after the service has been created for the first time, but before it has been returned by the container. They allow you to interact with the service before it's returned; you can configure it further, add listeners, use it to configure other services, or even replace the instance with an alternative. In this example, we're opting to configure the Application instance further by registering routes with it.

We've even written this approach up in our documentation.

So far, so good. But it means discovering where routes are registered becomes more difficult. You now have to look in each of:

  • config/routes.php
  • Each file in config/autoload/:
    • looking for delegators attached to the Application class,
    • and then checking those to see if they register routes.
  • In config/config.php to identify ConfigProvider classes, and then:
    • looking for delegators attached to the Application class,
    • and then checking those to see if they register routes.

The larger your application gets, the more work this becomes. Your config/routes.php file becomes far more readable, but finding all your routes becomes far harder.

One-off Functions

In examining this problem for the umpteenth time this week, I finally stumbled upon a solution that I find acceptable, at least initially.

What I've done is as follows:

  • I've created a function in my ConfigProvider that accepts the Application instance and any other arguments I want to pass to it, and which registers routes with the instance.
  • I call that function within my config/routes.php.

Building on the example above, the ConfigProvider for the App\Blog module now has the following method:

namespace App\Blog;

use Zend\Expressive\Application;
use Zend\Expressive\Csrf\CsrfMiddleware;
use Zend\Expressive\Session\SessionMiddleware;

class ConfigProvider
{
    public function __invoke() : array
    {
        /* ... */
    }

    public function registerRoutes(
        Application $app,
        string $basePath = '/blog'
    ) : void {
        $app->get(
            $basePath . '[/]',
            Handler\LandingPageHandler::class,
            'blog'
        );
        $app->get($basePath . '/{id:[^/]+\.html}', [
            SessionMiddleware::class,
            CsrfMiddleware::class,
            Handler\BlogPostHandler::class,
        ], 'blog.post');
        $app->post($basePath . '/comment/{id:[^/]+\.html}', [
            SessionMiddleware::class,
            CsrfMiddleware::class,
            Handler\ProcessBlogCommentHandler::class,
        ], 'blog.comment');
    }
}

Within my config/routes.php, I can create a temporary instance and call the method:

declare(strict_types=1);

return function (
    \Zend\Expressive\Application $app,
    \Zend\Expressive\MiddlewareFactory $factory,
    \Psr\Container\ContainerInterface $container
) : void {
    (new \App\Blog\ConfigProvider())->registerRoutes($app);
};

This approach eliminates the problems of using delegator factories:

  • There's a clear indication that a given class method registers routes.
  • I can then look directly at that method to determine what those routes are.

One thing I like about this approach is that it allows me to keep the routes close to the code that handles them (i.e., within each module), while still giving me control over their registration at the application level.
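
As an added bonus, the $basePath argument means the call site controls where the module is mounted; to serve the blog from a different prefix (a hypothetical /journal here), only config/routes.php changes:

(new \App\Blog\ConfigProvider())->registerRoutes($app, '/journal');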

What strategies have you tried?


We pioneered a pattern for exception handling in Zend Framework back when we began development on version 2, around seven years ago. The pattern looks like this:

  • We would create a marker ExceptionInterface for each package.
  • We would extend SPL exceptions and implement the package marker interface when doing so.

What this gave users was the ability to catch in three ways:

  • They could catch the most specific exception type by class name.
  • They could catch all package-level exceptions using the marker interface.
  • They could catch general exceptions using the associated SPL type.

So, as an example:

try {
    $do->something();
} catch (MostSpecificException $e) {
} catch (PackageLevelExceptionInterface $e) {
} catch (\RuntimeException $e) {
}

This kind of granularity is really nice to work with. So nice, in fact, that some standards produced by PHP-FIG now ship such interfaces; PSR-11, for example, ships a ContainerExceptionInterface and a NotFoundExceptionInterface.

One thing we've started doing recently as we make packages support only PHP 7 versions is to have the marker ExceptionInterface extend the Throwable interface; this ensures that implementations must be able to be thrown!
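
Such a marker interface is trivial to write; a minimal sketch, using a hypothetical package namespace:

namespace Acme\Component\Exception;

interface ExceptionInterface extends \Throwable
{
}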

So, what happens when you're writing a one-off implementation of something that is expected to throw an exception matching one of these interfaces?

Why, use an anonymous class, of course!

As an example, I was writing up some documentation that illustrated a custom ContainerInterface implementation today, and realized I needed to throw an exception at one point, specifically a Psr\Container\NotFoundExceptionInterface. I wrote up the following snippet:

use Psr\Container\NotFoundExceptionInterface;
use RuntimeException;

$message = sprintf(/* ... */);
throw new class($message) extends RuntimeException implements
    NotFoundExceptionInterface {
};

Done!

This works because RuntimeException takes a message as the first constructor argument; by extending that class, I gain that behavior. Since NotFoundExceptionInterface is a marker interface, I did not need to add any additional behavior, so this inline example works out-of-the-box.
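
Consumers can then catch the result at any of the granularity levels described earlier; for instance (the service name here is hypothetical):

try {
    $container->get('unknown-service');
} catch (\Psr\Container\NotFoundExceptionInterface $e) {
    // react to the missing service
}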

What else are you using anonymous classes for?


I've been running redis in Docker for a number of sites, to handle things such as session data storage, hubot settings, and more.

I recently ran into a problem on one of my systems where it was reporting:

Can't save in background: fork: Out of memory

A quick google search showed this is a common error, so much so that there is an official FAQ about it. The solution is to set /proc/sys/vm/overcommit_memory to 1.

The trick when using Docker is that this needs to happen on the host machine.
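
On the host, that means something like the following (the second command persists the setting across reboots):

$ sudo sysctl vm.overcommit_memory=1
$ echo 'vm.overcommit_memory = 1' | sudo tee -a /etc/sysctl.conf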

This still didn't solve my problem, though. So I ran a docker ps on the host machine to get an idea of what was happening. And discovered that, somehow, I had two identical redis containers running, using the exact same configuration - which meant they were doing backups to the same volume. Killing the one no longer being used by my swarm services caused everything to work once again.


I've been working on building PHP Docker images for the purposes of testing, as well as to potentially provide images containing the Swoole extension. This is generally straightforward, as the official PHP images are well-documented.

This week, I decided to see if I could build Alpine-based images, as they can greatly reduce the final image size. And I ran into a problem.

One of the test-beds I use builds RSS and Atom feeds using zend-feed. When I tried one of these images, I started getting failures like the following:

PHP Warning:  DOMDocument::loadXML(): xmlParseEntityRef: no name in Entity, line: 167 in /var/www/vendor/zendframework/zend-feed/src/Writer/Renderer/Entry/Atom.php on line 404
PHP Fatal error:  Uncaught TypeError: Argument 1 passed to DOMDocument::importNode() must be an instance of DOMNode, null given in /var/www/vendor/zendframework/zend-feed/src/Writer/Renderer/Entry/Atom.php:371
Stack trace:
#0 /var/www/vendor/zendframework/zend-feed/src/Writer/Renderer/Entry/Atom.php(371): DOMDocument->importNode(NULL, 1)
#1 /var/www/vendor/zendframework/zend-feed/src/Writer/Renderer/Entry/Atom.php(53): Zend\Feed\Writer\Renderer\Entry\Atom->_setContent(Object(DOMDocument), Object(DOMElement))
#2 /var/www/vendor/zendframework/zend-feed/src/Writer/Renderer/Feed/Atom.php(91): Zend\Feed\Writer\Renderer\Entry\Atom->render()
#3 /var/www/vendor/zendframework/zend-feed/src/Writer/Feed.php(237): Zend\Feed\Writer\Renderer\Feed\Atom->render()
#4 /var/www/src/Blog/Console/FeedGenerator.php(209): Zend\Feed\Writer\Feed->export('Atom')

During an initial search, this appeared to be a problem due to libxml2 versions, and so I went down a rabbit hole trying to get an older libxml2 version in place, and have all of the various XML extensions compile against it. However, the error persisted.

So, I did a little more sleuthing. I fired up the container with a shell:

$ docker run --entrypoint /bin/sh -it php:7.2-cli-alpine3.8

From there, I used apk to add some editing and debugging tools so I could manually step through some of the code. In doing so, I was able to discover the exact feed item that was causing problems, and, better, get the content it was trying to use.
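
Installing tools inside the container is a one-liner; the packages named here are purely illustrative:

$ apk add --no-cache vim strace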

I realized at that point that the problem was the content itself, which was being massaged via the tidy extension before being passed to DOMDocument::loadXML(). For some reason, the content generated was not valid XML! (Which is really, really odd, as the whole point of tidy is to produce valid markup!)

I checked which version of ext-tidy was installed and which libtidy it was compiled against, then checked the php:7.2-cli image to see what it had, and discovered that while Alpine was using libtidy 5.6.0, the Debian-based image was using 5.2.0. In fact, Ubuntu 18.10 still distributes 5.2.0!

So, I then went on a quest to figure out how to get the earlier libtidy version, and compile the tidy extension against it. This is what I came up with:

# DOCKER-VERSION        1.3.2

FROM php:7.2-cli-alpine3.8

# Compile-time dependencies
RUN echo 'http://dl-cdn.alpinelinux.org/alpine/v3.6/community' >> /etc/apk/repositories
RUN apk update && \
  apk add --no-cache 'tidyhtml-dev==5.2.0-r1'

# Install the extension
RUN docker-php-ext-install -j$(nproc) tidy

Once I'd built an image using the above, I tried out my code, and the errors disappeared!

This post mainly exists because my google searches were finding nothing. Hopefully, somebody else who runs into the problem will get something useful going forward!


For the past thirteen years, I've been either consuming Zend Framework or directly contributing to it. Since 2009, I've operated as project lead, and, since then, shepherded the version 2 and 3 releases, added Apigility to the ZF ecosystem, and helped bring middleware paradigms to the mainstream by assisting with the creation of Stratigility and coordination of the Expressive project. As I write this, the various ZF packages have been downloaded over 300 MILLION times, with 200 million of those being in the past 18 months!

In the last three years, I have performed this work under the umbrella of Rogue Wave Software, who acquired Zend in 2015. However, Rogue Wave has recently made a strategic decision to focus its efforts solely on the Zend Server product of the Zend portfolio. While Rogue Wave will continue to support Zend Framework via the Zend Server service license agreements, it will no longer continue to actively develop it — and this means both myself and Enrico Zimuel will be leaving the company and looking for new opportunities in the not-too-distant future, along with Zeev Suraski and Dmitry Stogov.

We all care deeply about the Zend Framework ecosystem, and we are evaluating options to ensure its continuation and longevity. These include either finding a new corporate sponsor for the project, or forming a foundation. This is where YOU come in: if you work for a company that would be interested in supporting such efforts, I would love to hear from you. Feel free to reach out to me at matthew@weierophinney.net with questions or queries.

The Future of Zend Framework was originally published 17 October 2018 on https://mwop.net by Matthew Weier O'Phinney.

Have you used Node.js?

For those of my readers unfamiliar with Node.js, it's a server-side JavaScript runtime that provides the ability to create, among other things, network services. To do so, it provides an event loop, which allows for such things as asynchronous processing.

In the PHP ecosystem, a group of Chinese developers have been creating an extension that provides many of the same capabilities as Node.js. This extension, called Swoole, allows you to create web servers with asynchronous capabilities. In many cases, the asynchronous capabilities are handled via coroutines, allowing you to write normal, synchronous code that still benefits from the asynchronous nature of the system event loop, allowing your server to continue responding to new requests as they come in!

We've been gradually adding and refining our Swoole support in Expressive, and recently issued a stable release that will work with any PSR-15 request handler. In this post, I'll enumerate what I feel are the reasons for considering Swoole when deploying your PHP middleware application.

I feel there are three key advantages to Swoole, and, by extension, any async PHP runtime:

  • Application-specific servers
  • Performance
  • Async processing

Application-specific servers

There are a few general architectures for applications:

  • A single web server sitting in front of many web applications.
  • A single web server sitting in front of a single web application.
  • A load balancer sitting in front of many servers. Some servers might serve the same application, to provide redundancy. (Often, today, these may even be identical docker containers.)

The first scenario is common in internal networks and development, and in many shared hosting scenarios. It's generally considered less secure, however, as a vulnerability in one application can potentially escalate to affect all applications hosted on the server. Additionally, it means that any updates to PHP versions must be tested on all applications, which often means updates are few and far between — which is also problematic from a security standpoint.

When you want to isolate the environment, you'll move to a single web server, single PHP application model.

And when you start scaling, this becomes a load balancer sitting in front of many of these web server/PHP application pairs.

In each of these last two scenarios, there's one thing I want to point out: your application consists of at least two distinct services: the PHP processes, and a web server.

You may have other services as well, such as an RDBMS or document database, cache, search, etc. But generally these are on separate servers and scaled separately. As such, they're outside of this discussion.

In these scenarios, this means each "server" is actually a composite. And when you are adding redundancy to your architecture, this adds significant complexity. It's one more process on each and every node that can fail, and additional configuration you need when deploying.

When we start thinking about microservices, this becomes more problematic. Microservices should be quick and easy to deploy; one service per container is both typical and desired.

What Swoole lets us do is remove one layer of that complexity.

We can have a service per container, and that container can be built with only PHP. We start the Swoole HTTP server, and it's ready to go. We then tell the reverse proxy or load balancer how to route to it, and we're done.
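
To illustrate how little is involved, a minimal, bare-bones Swoole HTTP server (a sketch, not the Expressive bindings) is a single script:

use Swoole\Http\Request;
use Swoole\Http\Response;
use Swoole\Http\Server;

$server = new Server('0.0.0.0', 9501);

$server->on('request', function (Request $request, Response $response) {
    // Dispatch your bootstrapped application here.
    $response->header('Content-Type', 'text/plain');
    $response->end("Hello from Swoole\n");
});

$server->start();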

This is useful in each of the scenarios, including the one web server/multiple applications scenario, as we can have different PHP runtimes per application. Our "web server" becomes a reverse proxy instead.

Application-specific servers allow us to simplify our deployment, and ship microservices quickly.

Performance

Remember when PHP 7 came out, and it was like doubling the performance of your application?

What if you could do that again?

In our initial benchmarks of Expressive applications, we found that they performed four times better under Swoole than under a traditional nginx+php-fpm pair. More interesting: when benchmarking with a high number of concurrent requests, we also found that Swoole had fewer failed requests. This means you get both better performance and better resilience!

And the hits keep rolling in: when we enabled Swoole's coroutine support and benchmarked endpoints that made use of functionality backed by that coroutine support, we observed up to a ten-fold increase!

The coroutine support covers primarily network I/O operations. As such, operations that hit cache servers, use PDO, or make web requests benefit from it immediately, with no changes to your code.

Swoole makes this possible in a couple of ways. First, because you fire up the server exactly once, you no longer pay the cost of bootstrapping your application on each and every request; your application is fully bootstrapped from the moment you start accepting requests. Bootstrapping often accounts for the greatest single amount of resource usage in your application.

Second, Swoole runs as an event loop, just like Node.js, allowing it to defer processing of long-running requests in order to respond to new, incoming requests. This leads into my last point.

Async processing

Swoole's event loop provides async functionality to PHP applications. While a number of userland libraries have popped up over the past five years or so that provide async capabilities for PHP, Swoole's is done as a native C extension, and works regardless of the operating system.

When you have an event loop, you can defer processing, which allows the server to respond to additional requests. Commonly, deferment can be explicit:

public function handle(ServerRequestInterface $request) : ResponseInterface
{
    $ts = new DateTimeImmutable();
    \Swoole\Event::defer($this->createCacheDeferment($ts));
    return new EmptyResponse(202);
}

public function createCacheDeferment(DateTimeImmutable $ts) : callable
{
    return function () use ($ts) {
        sleep(5);
        $now = new DateTimeImmutable();
        $item = $this->cache->getItem('ts');
        $item->set(sprintf(
            "Started: %s\nEnded: %s",
            $ts->format('r'),
            $now->format('r')
        ));
        $this->cache->save($item);
    };
}

In this example, we calculate the content to return, defer caching, and return a response immediately. This means your user does not need to wait for you to finish caching content.

Logging is another use case. In the Expressive Swoole bindings, we do access logging after we mark the response complete. This ensures that logging does not impact response times.

Another use case is webhooks. Your application can accept a payload immediately, but finish processing of it after sending the response back to the client.

Swoole also provides async-enabled versions of common filesystem operations, MySQL, Redis, and an HTTP client. In each of these, you provide a callback indicating what should be done once the operation is complete:

use Swoole\Http\Client as HttpClient;

// The async client accepts a hostname, port, and SSL flag rather than a URL:
$client = new HttpClient('example.com', 443, true);
$client->setHeaders([
    'Accept' => 'application/json',
    'Authorization' => sprintf('Bearer %s', $token),
]);

// Make the request, telling it what code to execute once
// it is complete:
$client->get('/api/resource', function ($response) {
    // process the response 
});

// This code executes before the request completes:
$counter++;

Code like the above has led to the term "callback hell" when you have many such deferments that depend on each other. So, what do you do if you want your code to be "non-blocking", but don't want to write callbacks all the time? Well, recent versions of Swoole allow you to enable coroutine support for most I/O operations. What this means is that you can write your code just as you would in a synchronous environment; whenever an operation that triggers a coroutine occurs, the server will advance the event loop, allowing it to answer additional requests before the current one completes its work, and then resume execution where it left off.

// This spawns a coroutine:
$statement = $pdo->query($sql);
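
Enabling those hooks is a single call at server startup (assuming a Swoole 4.x release with runtime coroutine support):

// Swap most blocking I/O (PDO, sockets, sleep, etc.) for coroutine-aware versions:
\Swoole\Runtime::enableCoroutine();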

Async functionality may not directly improve the performance of your application, but it will let your application answer more requests, allowing you to handle greater volumes of traffic!

zend-expressive-swoole

We released zendframework/zend-expressive-swoole 1.0.0 two weeks ago. This library acts as a zend-httphandlerrunner RequestHandlerRunner implementation, which means:

  • It can be used with any PSR-15 application.
  • It can be used with any PSR-7 implementation.

In other words, if you want to use Swoole with the upcoming Slim 4 or with equip/dispatch or with northwoods/broker or any of the myriad PSR-15 dispatch systems out there, you can.

The library provides some interesting features for users:

  • Serving of static resources, with HTTP client-side caching headers.
  • Configurable logging.
  • Ability to restart worker processes.

I've been running applications on versions of it for the past two months, and have found it stable and reliable. I definitely think it's worth giving a spin!

Fin

I'm really excited about the possibilities of Swoole and other async systems, as I think they afford us better performance, better reliability, and the ability to defer functionality that doesn't need to complete before we respond to clients. I'd love to hear YOUR experiences, though, particularly in the form of blog posts! Send me a link to a blog post via a comment, or by tweeting at me, and I'll add it to the ZF newsletter.

Updates
  • 2018-10-17: Fixed typo in first sentence.
Async Expressive with Swoole was originally published 16 October 2018 on https://mwop.net by Matthew Weier O'Phinney.

The last week has been my first foray into GraphQL, using the GitHub GraphQL API endpoints. I now have Opinions™.

The promise is fantastic: query for everything you need, but nothing more. Get it all in one go.

But the reality is somewhat... different.

What I found was that you end up with a lot of garbage data structures that you then, on the client side, need to decipher and massage, unpacking edges, nodes, and whatnot. I ended up having to do almost a dozen array_column, array_map, and array_reduce operations on the returned data to get a structure I can actually use.

The final data I needed looked like this:

[
  {
    "name": "zendframework/zend-expressive",
    "tags": [
      {
        "name": "3.0.2",
        "date": "2018-04-10"
      }
    ]
  }
]

To fetch it, I needed a query like the following:

query showOrganizationInfo(
  $organization:String!
  $cursor:String!
) {
  organization(login:$organization) {
    repositories(first: 100, after: $cursor) {
      pageInfo {
        startCursor
        hasNextPage
        endCursor
      }
      nodes {
        nameWithOwner
        tags:refs(refPrefix: "refs/tags/", first: 100, orderBy:{field:TAG_COMMIT_DATE, direction:DESC}) {
          edges {
            tag: node {
              name
              target {
                ... on Commit {
                  pushedDate
                }
                ... on Tag {
                  tagger {
                    date
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

Which gave me data like the following:

{
  "data": {
    "organization": {
      "repositories: {
        "pageInfo": {
          "startCursor": "...",
          "hasNextPage": true,
          "endCursor": "..."
        },
        "nodes": [
          {
            "nameWithOwner": "zendframework/zend-expressive",
            "tags": {
              "edges": [
                "tag": {
                  "name": "3.0.2",
                  "target": {
                    "tagger": {
                      "date": "2018-04-10"
                    }
                  }
                }
              ]
            }
          }
        ]
      }
    }
  }
}

How did I discover how to create the query? I'd like to say it was by reading the docs. I really would. But these gave me almost zero useful examples, particularly when it came to pagination, ordering results sets, or what those various "nodes" and "edges" bits were, or why they were necessary. (I eventually found the information, but it's still rather opaque as an end-user.)

Additionally, see that pageInfo bit? This brings me to my next point: pagination sucks, particularly if it's not at the top-level. You can only fetch 100 items at a time from any given node in the GitHub GraphQL API, which means pagination. And I have yet to find a client that will detect pagination data in results and auto-follow them. Additionally, the "after" property had to be something valid... but there were no examples of what a valid value would be. I had to resort to StackOverflow to find an example, and I still don't understand why it works.

I get why clients cannot unfurl pagination, as pagination data could appear anywhere in the query. However, it hit me hard, as I thought I had a complete set of data, only to discover around half of it was missing once I finally got the processing correct.

If any items further down the tree also require pagination, you're in for some real headaches, as you then have to fetch paginated sets depth-first.

So, while GraphQL promises fewer round trips and exactly the data you need, my experience so far is:

  • I end up having to be very careful about structuring my queries, paying huge attention to pagination potential, and often sending multiple queries ANYWAYS. A well-documented REST API is often far easier to understand and work with immediately.

  • I end up doing MORE work client-side to make the data I receive back USEFUL. This is because the payload structure is based on the query structure and the various permutations you need in order to get at the data you need. Again, a REST API usually has a single, well-documented payload, making consumption far easier.

I'm sure I'm probably mis-using GraphQL, or missing a number of features to make this stuff easier, but so far, I'm left wishing I could just have a number of useful REST endpoints that I can hit consistently in order to aggregate the data I need.

Before anybody suggests it, yes, I am very aware that GitHub also offers a REST API, and the v3 API has endpoints for most of what I needed. However, I had to rely on tags, not releases, as not all of our tags have associated releases. However, the data returned for tags does not include the commit date; for that, you need to fetch the associated commit, and then the date may be under either the author or the committer. This approach would have meant literally thousands of calls to get the data I need, which would have had me hitting rate limits, and potentially taking hours to complete.

My point: perhaps instead of GraphQL, aggregating a bit more data in REST resources (e.g., including commit data with tags), or providing endpoints that allow merging specific resource types could have solved the problem easily. This is where having a developer relations team that finds out what data consumers are needing comes in handy, instead of simply mandating graphql all the things to allow infinite flexibility (and the frustrations of such flexibility, both for the API developer and consumer).

Notes on GraphQL was originally published 18 July 2018 on https://mwop.net by Matthew Weier O'Phinney.