Sitepoint PHP

Learn CSS | HTML5 | JavaScript | Wordpress | Tutorials-Web Development | Reference | Books and More

8 Tips for Improving Bootstrap Accessibility

Mon, 02/12/2018 - 17:00

A few years ago, I wrote about my experiences developing a Bootstrap version 3 project to be fully accessible for people with disabilities. It focused mostly on how accessible Bootstrap is in terms of front-end design. (It didn’t cover accessibility in terms of screen readers, as that’s a whole other story.)

While I could see that the developers behind Bootstrap were making an effort, there were a few areas where this popular UI library fell short. I could also see from issues raised on the project that they were actively improving --- which is fantastic, considering that approximately 3.6% of websites use Bootstrap.

Recently, Bootstrap version 4 was released, so let’s take a look and see if any of the issues I had in the past have improved.

What We’re Looking For with Design Accessibility

There are a few things to consider when designing a website with accessibility in mind. I believe these improve the user experience for everyone and will likely cover a lot of points you would consider anyway.


One way to achieve accessibility is by having a clean, easy-to-use layout that looks good on all devices, as well as at a high zoom level. Up to 200% is a good guide.

Bonus points: having front-end code that matches the layout is also good for users who access the Web with a screen reader or by using a keyboard instead of a mouse.

This allows people to use your website easily irrespective of how they’re viewing it.

Continue reading: 8 Tips for Improving Bootstrap Accessibility


PHP-FPM tuning: Using ‘pm static’ for Max Performance

Wed, 11/29/2017 - 18:00

Let's take a very quick look at how best to set up PHP-FPM for high throughput, low latency, and a more stable use of CPU and memory. By default, most setups have PHP-FPM’s PM (process manager) string set to dynamic, and there’s also the common advice to use ondemand if you suffer from memory availability issues. However, let's compare the two management options based on the official documentation, and also compare my favorite setup for high traffic: static pm:

  • pm = dynamic: the number of child processes is set dynamically based on the following directives: pm.max_children, pm.start_servers, pm.min_spare_servers, pm.max_spare_servers.

  • pm = ondemand: the processes spawn on demand when requested, as opposed to dynamic, where pm.start_servers children are started when the service starts.

  • pm = static: the number of child processes is fixed by pm.max_children.

See the full list of global php-fpm.conf directives for further details.
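To make the options concrete, here's a sketch of how these directives might appear in a pool configuration file. The path and values are assumptions; size them for your own server:

; sketch of a PHP-FPM pool file (path varies by distro;
; /etc/php/7.1/fpm/pool.d/www.conf is one common location)
[www]
pm = dynamic
pm.max_children = 50        ; hard cap on child processes (the only directive static uses)
pm.start_servers = 10       ; children spawned at service start (dynamic)
pm.min_spare_servers = 5    ; minimum idle children kept around (dynamic)
pm.max_spare_servers = 15   ; maximum idle children kept around (dynamic)
;pm.process_idle_timeout = 10s  ; used by ondemand to reap idle children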

PHP-FPM Process Manager (PM) Similarities to CPUFreq Governor

Now, this may seem a bit off topic, but I hope to tie it back into our PHP-FPM tuning topic. Okay, we’ve all had slow CPU issues at some point, whether on a laptop, a VM or a dedicated server. Remember CPU frequency scaling (the CPUFreq governor)? These settings, available on both *nix and Windows, can improve performance and system responsiveness by changing the CPU governor setting from ondemand to performance. This time, let's compare the descriptions and look for similarities:

  • Governor = ondemand: scales CPU frequency dynamically according to current load. Jumps to the highest frequency and then scales down as the idle time increases.

  • Governor = conservative: scales the frequency dynamically according to current load. Scales the frequency more gradually than ondemand.

  • Governor = performance: always run the CPU at the maximum frequency.

See the full list of CPUFreq governor options for further details.
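If you want to inspect or change the governor yourself on Linux, here's a quick sketch (the cpupower utility may need to be installed separately):

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # show the current governor
sudo cpupower frequency-set -g performance                  # pin the CPU to the performance governor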

Notice the similarities? I wanted to use this comparison first because it's the clearest way to frame this article's recommendation: use pm static for PHP-FPM as your first choice.

With the CPU governor, the performance setting is a pretty safe performance boost, because it’s almost entirely dependent on your server CPU’s limit. The only other factors are things such as heat, battery life (on a laptop) and other side effects of permanently clocking your CPU frequency to 100%. Once set to performance, it is indeed the fastest setting for your CPU. For example, read about the ‘force_turbo’ setting on the Raspberry Pi, which forces the board to use the performance governor; there, the performance improvement is even more noticeable due to the low CPU clock speeds.

Using ‘pm static’ to Achieve Your Server’s Max Performance

The PHP-FPM pm static setting depends heavily on how much free memory your server has. If you're suffering from low server memory, then pm ondemand or dynamic may be better options. On the other hand, if you have the memory available, you can avoid much of the PHP process manager (PM) overhead by setting pm to static at the max capacity of your server. In other words, when you do the math, pm.max_children should be set to the maximum number of PHP-FPM processes that can run without creating memory availability or cache pressure issues, and not so high as to overwhelm the CPU(s) and leave a pile of pending PHP-FPM operations.

In the screenshot above, this server has pm = static and pm.max_children = 100, which uses a max of around 10GB of the 32GB installed. Take note of the self-explanatory highlighted columns. When the screenshot was taken there were about 200 ‘active users’ (past 60 seconds) in Google Analytics; at that level, about 70% of the PHP-FPM children are still idle. This means PHP-FPM is always set to the max capacity of your server’s resources regardless of current traffic. Idle processes stay online, waiting for traffic spikes and responding immediately, rather than having to wait for the pm to spawn children and then kill them off once pm.process_idle_timeout expires. I have pm.max_requests set extremely high because this is a production server with no PHP memory leaks. You can use pm.max_requests = 0 with static if you have 110% confidence in your current and future PHP scripts, but it’s recommended to restart scripts over time. Set the number of requests to a high value, since the point is to avoid pm overhead: for example, at least pm.max_requests = 1000, depending on your pm.max_children and the number of requests per second.
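Condensed into pool directives, the setup described above would look something like this sketch (values taken from this example server; adjust for your own memory budget):

pm = static
pm.max_children = 100    ; sized so all children fit in RAM with headroom
pm.max_requests = 1000   ; respawn each child after 1000 requests (0 disables respawning)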

The screenshot uses Linux top, filtered with the ‘u’ (user) option and the name of the PHP-FPM user. The number of processes displayed is only the ‘top’ 50 or so (I didn’t count), but basically top displays the stats which fit in your terminal window --- in this case, sorted by %CPU. To view all 100 PHP-FPM processes you can use something like:

top -bn1 | grep php-fpm
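If you just want to count them rather than list them, pipe through grep's -c flag:

top -bn1 | grep -c php-fpm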

Continue reading: PHP-FPM tuning: Using ‘pm static’ for Max Performance


23 Development Tools for Boosting Website Performance

Tue, 11/28/2017 - 18:00

When dealing with performance, it's hard to remember all the tools that might help you out during development. For that purpose, we've compiled a list of 23 performance tools for your reference. Some you'll have heard of, others probably not. Some have been covered in detail during our performance month, others are yet to be covered in future articles; but all are very useful and should be part of your arsenal.

Client-side Performance Tools

1. Test your Mobile Speed with Google

Google’s Test My Site is an online tool offered by Google, powered by the popular website performance tool WebPageTest.

You can either view your report on the site or have it emailed to you.

The tool gives you your website loading time (or Speed Index), calculated using a Chrome browser on a Moto G4 device over a 3G network. It also gives you the estimated percentage of visitors lost due to loading time. Among other things, it also:

  • compares your site speed with the top-performing sites in your industry
  • gives you top fixes that can help you speed up your website loading time.
2. Sitespeed.io

Sitespeed.io is an open-source tool --- or a set of tools --- that can help you measure your website performance and improve it.


  • Coach: gives you performance advice and fixes for your website based on best practices.
  • Browsertime: collects metrics and HAR files from your browser.
  • Chrome-HAR: helps you compare HAR files.
  • PageXray: extracts different metrics (from HAR files) such as size, number of requests, and so on.

You can install these tools using npm:

npm install -g sitespeed.io
sitespeed.io --help

Or Docker:

docker run --shm-size=1g --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io --video --speedIndex <url>

3. Lighthouse by Google

Lighthouse is an open-source tool for running audits to improve web page quality. It's integrated into Chrome's DevTools and can also be installed as a Chrome extension or CLI-based tool. It's an indispensable tool for measuring, debugging and improving the performance of modern, client-side apps (particularly PWAs).

You can find the extension in the Chrome Web Store.

Or you can install Lighthouse, from npm, on your system with:

npm install -g lighthouse

Then run it with:

lighthouse <url>
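The CLI also supports output flags if you want a saved report rather than terminal output; a quick sketch (flag names as of recent Lighthouse releases, so verify against lighthouse --help):

lighthouse <url> --output json --output-path ./report.json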

You can use Lighthouse programmatically to build your own performance tool or for continuous integration.

Make sure to check these Lighthouse-based tools:

  • webpack-lighthouse-plugin: a Lighthouse plugin for Webpack
  • treo: Lighthouse as a service with a personal free plan.
  • calibreapp: a paid service, based on Lighthouse, that helps you track, understand and improve performance metrics using real Google Chrome instances.
  • lighthouse-cron: a module which can help you track your Lighthouse scores and metrics over time.

We've got an in-depth look at Lighthouse in our PWA performance month post.

4. Lightcrawler

You can use Lightcrawler to crawl your website then run each page found through Lighthouse.

Start by installing the tool via npm:

npm install --save-dev lightcrawler

Then run it from the terminal by providing the target URL and a JSON configuration file:

lightcrawler --url <url> --config lightcrawler-config.json

The configuration file can be something like:

{ "extends": "lighthouse:default", "settings": { "crawler": { "maxDepth": 2, "maxChromeInstances": 5 }, "onlyCategories": [ "Performance", ], "onlyAudits": [ "accesskeys", "time-to-interactive", "user-timings" ] } } 5. YSlow

YSlow is a JavaScript bookmarklet that can be added to your browser and invoked on any visited web page. This tool analyzes web pages and helps you discover the reasons for slowness based on Yahoo's rules for high-performance websites.


You can install YSlow by dragging and dropping the bookmarklet to your browser’s bookmark bar. Find more information here.

6. GTmetrix

GTmetrix is an online tool that gives you insights into your website performance (fully loaded time, total page size, number of requests etc.) and also practical recommendations on how to optimize it.

7. Page Performance

Page performance is a Chrome extension that can be used to run a quick performance analysis. If you have many tabs open, the extension will be invoked on the active tab.

8. The AMP Project

The AMP (Accelerated Mobile Pages) project is an open-source project that aims to make the web faster. The AMP project enables developers to create websites that are fast, high-performing, and that provide great user experiences across all platforms (desktop browsers and mobile devices).

The AMP project essentially consists of three core components:

  • AMP HTML: it's HTML but with some restrictions to guarantee reliable performance.
  • AMP JS: a JavaScript library that takes care of rendering AMP HTML.
  • AMP Cache: a content delivery network for caching and delivering valid AMP pages. You can use tools such as AMP Validator or amphtml-validator to check if your pages are valid AMP pages.

Once you add AMP markup to your pages, Google will discover them automatically and cache them to deliver them through the AMP CDN. You can learn how to create your first AMP page here.

Continue reading: 23 Development Tools for Boosting Website Performance


Case Study: Optimizing CommonMark Markdown Parser with Blackfire

Thu, 11/23/2017 - 07:12

As you may know, I am the author and maintainer of the PHP League's CommonMark Markdown parser. This project has three primary goals:

  1. fully support the entire CommonMark spec
  2. match the behavior of the JS reference implementation
  3. be well-written and super-extensible so that others can add their own functionality.

This last goal is perhaps the most challenging, especially from a performance perspective. Other popular Markdown parsers are built using single classes with massive regex functions. As you can see from this benchmark, this makes them lightning fast:

Library                     Avg. Parse Time   File/Class Count
Parsedown 1.6.0             2 ms              1
PHP Markdown 1.5.0          4 ms              4
PHP Markdown Extra 1.5.0    7 ms              6
CommonMark 0.12.0           46 ms             117

Unfortunately, because of the tightly-coupled design and overall architecture, it's difficult (if not impossible) to extend these parsers with custom logic.

For the League's CommonMark parser, we chose to prioritize extensibility over performance. This led to a decoupled object-oriented design which users can easily customize. This has enabled others to build their own integrations, extensions, and other custom projects.

The library's performance is still decent --- the end user probably can't differentiate between 42ms and 2ms (you should be caching your rendered Markdown anyway). Nevertheless, we still wanted to optimize our parser as much as possible without compromising our primary goals. This blog post explains how we used Blackfire to do just that.

Profiling with Blackfire

Blackfire is a fantastic tool from the folks at SensioLabs. You simply attach it to any web or CLI request and get this awesome, easy-to-digest performance trace of your application's request. In this post, we'll be examining how Blackfire was used to identify and optimize two performance issues found in version 0.6.1 of the league/commonmark library.

Let's start by profiling the time it takes league/commonmark to parse the contents of the CommonMark spec document:

Later on we'll compare this benchmark to our changes in order to measure the performance improvements.

Quick side-note: Blackfire adds overhead while profiling things, so the execution times will always be much higher than usual. Focus on the relative percentage changes instead of the absolute "wall clock" times.

Optimization 1

Looking at our initial benchmark, you can easily see that inline parsing with InlineParserEngine::parse() accounts for a whopping 43.75% of the execution time. Clicking this method reveals more information about why this happens:

Here we see that InlineParserEngine::parse() is calling Cursor::getCharacter() 79,194 times --- once for every single character in the Markdown text. Here's a partial (slightly-modified) excerpt of this method from 0.6.1:

public function parse(ContextInterface $context, Cursor $cursor)
{
    // Iterate through every single character in the current line
    while (($character = $cursor->getCharacter()) !== null) {
        // Check to see whether this character is a special Markdown character
        // If so, let it try to parse this part of the string
        foreach ($matchingParsers as $parser) {
            if ($res = $parser->parse($context, $inlineParserContext)) {
                continue 2;
            }
        }

        // If no parser could handle this character, then it must be a plain text character
        // Add this character to the current line of text
        $lastInline->append($character);
    }
}

Blackfire tells us that parse() is spending over 17% of its time checking every. single. character. one. at. a. time. But most of these 79,194 characters are plain text which don't need special handling! Let's optimize this.

Instead of adding a single character at the end of our loop, let's use a regex to capture as many non-special characters as we can:

public function parse(ContextInterface $context, Cursor $cursor)
{
    // Iterate through every single character in the current line
    while (($character = $cursor->getCharacter()) !== null) {
        // Check to see whether this character is a special Markdown character
        // If so, let it try to parse this part of the string
        foreach ($matchingParsers as $parser) {
            if ($res = $parser->parse($context, $inlineParserContext)) {
                continue 2;
            }
        }

        // If no parser could handle this character, then it must be a plain text character
        // NEW: Attempt to match multiple non-special characters at once.
        // We use a dynamically-created regex which matches text from
        // the current position until it hits a special character.
        $text = $cursor->match($this->environment->getInlineParserCharacterRegex());

        // Add the matching text to the current line of text
        $lastInline->append($text);
    }
}

Once this change was made, I re-profiled the library using Blackfire:

Okay, things are looking a little better. But let's actually compare the two benchmarks using Blackfire's comparison tool to get a clearer picture of what changed:

This single change resulted in 48,118 fewer calls to that Cursor::getCharacter() method and an 11% overall performance boost! This is certainly helpful, but we can optimize inline parsing even further.

Continue reading: Case Study: Optimizing CommonMark Markdown Parser with Blackfire


How to Optimize Docker-based CI Runners with Shared Package Caches

Tue, 11/21/2017 - 18:00

At Unleashed Technologies we use Gitlab CI with Docker runners for our continuous integration testing. We've put significant effort into speeding up the build execution speeds. One of the optimizations we made was to share a cache volume across all the CI jobs, allowing them to share files like package download caches.

Continue reading: How to Optimize Docker-based CI Runners with Shared Package Caches


How to Optimize SQL Queries for Faster Sites

Mon, 11/20/2017 - 18:00

This article was originally published on the Delicious Brains blog, and is republished here with permission.

You know that a fast site == happier users, improved ranking from Google, and increased conversions. Maybe you even think your WordPress site is as fast as it can be: you've looked at site performance, from the best practices of setting up a server, to troubleshooting slow code, and offloading your images to a CDN, but is that everything?

With dynamic, database-driven websites like WordPress, you might still have one problem on your hands: database queries slowing down your site.

In this post, I’ll take you through how to identify the queries causing bottlenecks and how to understand the problems with them, along with quick fixes and other approaches to speed things up. I’ll be using an actual query we recently tackled that was slowing things down on our customer portal.


The first step in fixing slow SQL queries is to find them. Ashley has sung the praises of the debugging plugin Query Monitor on the blog before, and it’s the database queries feature of the plugin that really makes it an invaluable tool for identifying slow SQL queries. The plugin reports on all the database queries executed during the page request. It allows you to filter them by the code or component (the plugin, theme or WordPress core) calling them, and highlights duplicate and slow queries:

If you don’t want to install a debugging plugin on a production site (maybe you’re worried about adding some performance overhead), you can opt to turn on the MySQL slow query log, which logs all queries that take longer than a certain amount of time to execute. This is relatively simple to configure, including where the queries are logged to. As this is a server-level tweak, the performance hit will be less than that of a debugging plugin on the site, but it should be turned off when not in use.


Once you have found an expensive query that you want to improve, the next step is to try to understand what is making it slow. Recently, during development on our site, we found a query that was taking around 8 seconds to execute!

SELECT
    l.key_id, l.order_id, l.activation_email, l.licence_key,
    l.software_product_id, l.software_version, l.activations_limit,
    l.created, l.renewal_type, l.renewal_id, l.exempt_domain,
    s.next_payment_date, s.status,
    pm2.post_id AS 'product_id',
    pm.meta_value AS 'user_id'
FROM oiz6q8a_woocommerce_software_licences l
    INNER JOIN oiz6q8a_woocommerce_software_subscriptions s ON s.key_id = l.key_id
    INNER JOIN oiz6q8a_posts p ON p.ID = l.order_id
    INNER JOIN oiz6q8a_postmeta pm ON pm.post_id = p.ID
        AND pm.meta_key = '_customer_user'
    INNER JOIN oiz6q8a_postmeta pm2 ON pm2.meta_key = '_software_product_id'
        AND pm2.meta_value = l.software_product_id
WHERE p.post_type = 'shop_order'
    AND pm.meta_value = 279
ORDER BY s.next_payment_date

We use WooCommerce and a customized version of the WooCommerce Software Subscriptions plugin to run our plugins store. The purpose of this query is to get all subscriptions for a customer where we know their customer number. WooCommerce has a somewhat complex data model, in that even though an order is stored as a custom post type, the id of the customer (for stores where each customer gets a WordPress user created for them) is not stored as the post_author, but instead as a piece of post meta data. There are also a couple of joins to custom tables created by the software subscriptions plugin. Let’s dive in to understand the query more.

MySQL is your Friend

MySQL has a handy DESCRIBE statement, which can be used to output information about a table’s structure, such as its columns, data types and defaults. So if you execute DESCRIBE wp_postmeta; you will see the following results:

Field        Type                  Null   Key   Default   Extra
meta_id      bigint(20) unsigned   NO     PRI   NULL      auto_increment
post_id      bigint(20) unsigned   NO     MUL   0
meta_key     varchar(255)          YES    MUL   NULL
meta_value   longtext              YES          NULL

That’s cool, but you may already know about it. But did you know that the DESCRIBE statement prefix can actually be used on SELECT, INSERT, UPDATE, REPLACE and DELETE statements? This is more commonly known by its synonym EXPLAIN and will give us detailed information about how the statement will be executed.
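Using it is just a matter of prefixing the statement. As a simplified, hypothetical example against the postmeta table:

EXPLAIN SELECT post_id, meta_value
FROM oiz6q8a_postmeta
WHERE meta_key = '_customer_user';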

Here are the results for our slow query:

id   select_type   table   type     possible_keys              key        key_len   ref                           rows    Extra
1    SIMPLE        pm2     ref      meta_key                   meta_key   576       const                         28      Using where; Using temporary; Using filesort
1    SIMPLE        pm      ref      post_id,meta_key           meta_key   576       const                         37456   Using where
1    SIMPLE        p       eq_ref   PRIMARY,type_status_date   PRIMARY    8                                       1       Using where
1    SIMPLE        l       ref      PRIMARY,order_id           order_id   8                                       1       Using index condition; Using where
1    SIMPLE        s       eq_ref   PRIMARY                    PRIMARY    8         deliciousbrainsdev.l.key_id   1       NULL

At first glance, this isn’t very easy to interpret. Luckily the folks over at SitePoint have put together a comprehensive guide to understanding the statement.

The most important column is type, which describes how the tables are joined. If you see ALL, that means MySQL is reading the whole table from disk, increasing I/O rates and putting load on the CPU. This is known as a “full table scan” (more on that later).

The rows column is also a good indication of what MySQL is having to do, as this shows how many rows it has looked in to find a result.

EXPLAIN also gives us more information we can use to optimize. For example, for the pm2 table (wp_postmeta), it tells us we are Using filesort, because we’re asking for the results to be sorted using an ORDER BY clause on the statement. If we were also grouping the query, we would be adding even more overhead to the execution.

Visual Investigation

MySQL Workbench is another handy, free tool for this type of investigation. For databases running on MySQL 5.6 and above, the results of EXPLAIN can be outputted as JSON, and MySQL Workbench turns that JSON into a visual execution plan of the statement:

It automatically draws your attention to issues by coloring parts of the query by cost. We can see straight away that the join to the wp_woocommerce_software_licences (alias l) table has a serious issue.
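If you want to generate that JSON yourself (for Workbench or your own tooling), MySQL 5.6+ accepts a FORMAT option on EXPLAIN. A quick sketch with a simplified, hypothetical query:

EXPLAIN FORMAT=JSON
SELECT * FROM oiz6q8a_posts WHERE post_type = 'shop_order';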

Continue reading: How to Optimize SQL Queries for Faster Sites


How to Read Big Files with PHP (Without Killing Your Server)

Thu, 11/16/2017 - 19:00

It’s not often that we, as PHP developers, need to worry about memory management. The PHP engine does a stellar job of cleaning up after us, and the web server model of short-lived execution contexts means even the sloppiest code has no long-lasting effects.

There are rare times when we may need to step outside of this comfortable boundary --- like when we're trying to run Composer for a large project on the smallest VPS we can create, or when we need to read large files on an equally small server.

It’s the latter problem we'll look at in this tutorial.

The code for this tutorial can be found on GitHub.

Measuring Success

The only way to be sure we’re making any improvement to our code is to measure a bad situation and then compare that measurement to another after we’ve applied our fix. In other words, unless we know how much a “solution” helps us (if at all), we can’t know if it really is a solution or not.

There are two metrics we can care about. The first is CPU usage. How fast or slow is the process we want to work on? The second is memory usage. How much memory does the script take to execute? These are often inversely proportional --- meaning that we can offload memory usage at the cost of CPU usage, and vice versa.

In an asynchronous execution model (like with multi-process or multi-threaded PHP applications), both CPU and memory usage are important considerations. In traditional PHP architecture, these generally become a problem when either one reaches the limits of the server.

It's impractical to measure CPU usage inside PHP. If that’s the area you want to focus on, consider using something like top, on Ubuntu or macOS. For Windows, consider using the Linux Subsystem, so you can use top in Ubuntu.

For the purposes of this tutorial, we’re going to measure memory usage. We’ll look at how much memory is used in “traditional” scripts. We’ll implement a couple of optimization strategies and measure those too. In the end, I want you to be able to make an educated choice.

The methods we’ll use to see how much memory is used are:

// formatBytes is taken from the documentation

memory_get_peak_usage();

function formatBytes($bytes, $precision = 2) {
    $units = array("b", "kb", "mb", "gb", "tb");

    $bytes = max($bytes, 0);
    $pow = floor(($bytes ? log($bytes) : 0) / log(1024));
    $pow = min($pow, count($units) - 1);

    $bytes /= (1 << (10 * $pow));

    return round($bytes, $precision) . " " . $units[$pow];
}

We’ll use these functions at the end of our scripts, so we can see which script uses the most memory at one time.

What Are Our Options?

There are many approaches we could take to read files efficiently. But there are also two likely scenarios in which we could use them. We could want to read and process data all at the same time, outputting the processed data or performing other actions based on what we read. We could also want to transform a stream of data without ever really needing access to the data.

Let’s imagine, for the first scenario, that we want to be able to read a file and create separate queued processing jobs every 10,000 lines. We’d need to keep at least 10,000 lines in memory, and pass them along to the queued job manager (whatever form that may take).

For the second scenario, let’s imagine we want to compress the contents of a particularly large API response. We don’t care what it says, but we need to make sure it’s backed up in a compressed form.

In both scenarios, we need to read large files. In the first, we need to know what the data is. In the second, we don’t care what the data is. Let’s explore these options…

Reading Files, Line By Line

There are many functions for working with files. Let’s combine a few into a naive file reader:

// from memory.php
function formatBytes($bytes, $precision = 2) {
    $units = array("b", "kb", "mb", "gb", "tb");

    $bytes = max($bytes, 0);
    $pow = floor(($bytes ? log($bytes) : 0) / log(1024));
    $pow = min($pow, count($units) - 1);

    $bytes /= (1 << (10 * $pow));

    return round($bytes, $precision) . " " . $units[$pow];
}

print formatBytes(memory_get_peak_usage());

// from reading-files-line-by-line-1.php
function readTheFile($path) {
    $lines = [];
    $handle = fopen($path, "r");

    while (!feof($handle)) {
        $lines[] = trim(fgets($handle));
    }

    fclose($handle);
    return $lines;
}

readTheFile("shakespeare.txt");

require "memory.php";

We’re reading a text file containing the complete works of Shakespeare. The text file is about 5.5MB, and the peak memory usage is 12.8MB. Now, let’s use a generator to read each line:

// from reading-files-line-by-line-2.php
function readTheFile($path) {
    $handle = fopen($path, "r");

    while (!feof($handle)) {
        yield trim(fgets($handle));
    }

    fclose($handle);
}

readTheFile("shakespeare.txt");

require "memory.php";

The text file is the same size, but the peak memory usage is 393KB. This doesn’t mean anything until we do something with the data we’re reading. Perhaps we can split the document into chunks whenever we see two blank lines. Something like this:

// from reading-files-line-by-line-3.php
$iterator = readTheFile("shakespeare.txt");

$buffer = "";

foreach ($iterator as $iteration) {
    preg_match("/\n{3}/", $buffer, $matches);

    if (count($matches)) {
        print ".";
        $buffer = "";
    } else {
        $buffer .= $iteration . PHP_EOL;
    }
}

require "memory.php";

Any guesses how much memory we’re using now? Would it surprise you to know that, even though we split the text document up into 1,216 chunks, we still only use 459KB of memory? Given the nature of generators, the most memory we’ll use is that which we need to store the largest text chunk in an iteration. In this case, the largest chunk is 101,985 characters.

I’ve already written about the performance boosts of using generators and Nikita Popov’s Iterator library, so go check that out if you’d like to see more!

Generators have other uses, but this one is demonstrably good for performant reading of large files. If we need to work on the data, generators are probably the best way.

Piping Between Files

In situations where we don’t need to operate on the data, we can pass file data from one file to another. This is commonly called piping (presumably because we don’t see what’s inside a pipe except at each end … as long as it's opaque, of course!). We can achieve this by using stream methods. Let’s first write a script to transfer from one file to another, so that we can measure the memory usage:

// from piping-files-1.php
file_put_contents(
    "piping-files-1.txt",
    file_get_contents("shakespeare.txt")
);

require "memory.php";

Unsurprisingly, this script uses slightly more memory to run than the text file it copies. That’s because it has to read (and keep) the file contents in memory until it has written to the new file. For small files, that may be okay. When we start to use bigger files, not so much…

Let’s try streaming (or piping) from one file to another:

// from piping-files-2.php
$handle1 = fopen("shakespeare.txt", "r");
$handle2 = fopen("piping-files-2.txt", "w");

stream_copy_to_stream($handle1, $handle2);

fclose($handle1);
fclose($handle2);

require "memory.php";

This code is slightly strange. We open handles to both files, the first in read mode and the second in write mode. Then we copy from the first into the second. We finish by closing both files again. It may surprise you to know that the memory used is 393KB.

That seems familiar. Isn’t that what the generator code used to store when reading each line? That’s because the second argument to fgets specifies how many bytes of each line to read (if omitted, it reads until it reaches the end of a line).

The third argument to stream_copy_to_stream is exactly the same sort of parameter (with exactly the same default). stream_copy_to_stream is reading from one stream, one line at a time, and writing it to the other stream. It skips the part where the generator yields a value, since we don’t need to work with that value.

Piping this text isn’t useful to us, so let’s think of other examples which might be. Suppose we wanted to output an image from our CDN, as a sort of redirected application route. We could illustrate it with code resembling the following:

// from piping-files-3.php
file_put_contents(
    "piping-files-3.jpeg",
    file_get_contents("")
);

// ...or write this straight to stdout, if we don't need the memory info

require "memory.php";

Imagine an application route brought us to this code. But instead of serving up a file from the local file system, we want to get it from a CDN. We may substitute file_get_contents for something more elegant (like Guzzle), but under the hood it’s much the same.

The memory usage (for this image) is around 581KB. Now, how about we try to stream this instead?

// from piping-files-4.php
$handle1 = fopen("", "r");
$handle2 = fopen("piping-files-4.jpeg", "w");

// ...or write this straight to stdout, if we don't need the memory info

stream_copy_to_stream($handle1, $handle2);

fclose($handle1);
fclose($handle2);

require "memory.php";

The memory usage is slightly less (at 400KB), but the result is the same. If we didn’t need the memory information, we could just as well print to standard output. In fact, PHP provides a simple way to do this:

$handle1 = fopen("", "r");
$handle2 = fopen("php://stdout", "w");

stream_copy_to_stream($handle1, $handle2);

fclose($handle1);
fclose($handle2);

// require "memory.php";

Other Streams

There are a few other streams we could pipe and/or write to and/or read from:

  • php://stdin (read-only)
  • php://stderr (write-only, like php://stdout)
  • php://input (read-only) which gives us access to the raw request body
  • php://output (write-only) which lets us write to an output buffer
  • php://memory and php://temp (read-write) are places we can store data temporarily. The difference is that php://temp will store the data in the file system once it becomes large enough, while php://memory will keep storing in memory until that runs out.
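As a quick illustration of that last pair, here's a minimal sketch using php://temp as a scratch buffer (the 2MB spill threshold is the documented default):

// buffer data in php://temp; it spills to the file system past ~2MB by default
$handle = fopen("php://temp", "r+");

fwrite($handle, "some data to buffer");
rewind($handle);

print stream_get_contents($handle); // read the buffered data back

fclose($handle);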

Continue reading: How to Read Big Files with PHP (Without Killing Your Server)


Your First PHP Code

Tue, 10/31/2017 - 18:00

The following is a short extract from our new book, PHP & MySQL: Novice to Ninja, 6th Edition, written by Tom Butler and Kevin Yank. It's the ultimate beginner's guide to PHP. SitePoint Premium members get access with their membership, or you can buy a copy in stores worldwide.

Now that you have your virtual server up and running, it’s time to write your first PHP script. PHP is a server-side language. This concept may be a little difficult to grasp, especially if you’ve only ever designed websites using client-side languages like HTML, CSS, and JavaScript.

A server-side language is similar to JavaScript in that it allows you to embed little programs (scripts) into the HTML code of a web page. When executed, these programs give you greater control over what appears in the browser window than HTML alone can provide. The key difference between JavaScript and PHP is the stage of loading the web page at which these embedded programs are executed.

Client-side languages like JavaScript are read and executed by the web browser after downloading the web page (embedded programs and all) from the web server. In contrast, server-side languages like PHP are run by the web server, before sending the web page to the browser. Whereas client-side languages give you control over how a page behaves once it’s displayed by the browser, server-side languages let you generate customized pages on the fly before they’re even sent to the browser.

Once the web server has executed the PHP code embedded in a web page, the result takes the place of the PHP code in the page. All the browser sees is standard HTML code when it receives the page, hence the name “server-side language.” Let’s look at a simple example of some PHP that generates a random number between 1 and 10 and then displays it on the screen:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Random Number</title>
  </head>
  <body>
    <p>Generating a random number between 1 and 10:
      <?php echo rand(1, 10); ?>
    </p>
  </body>
</html>

Most of this is plain HTML. Only the line between <?php and ?> is PHP code. <?php marks the start of an embedded PHP script and ?> marks its end. The web server is asked to interpret everything between these two delimiters and convert it to regular HTML code before it sends the web page to the requesting browser. If you right-click inside your browser and choose View Source (the text may be different depending on the browser you’re using) you can see that the browser is presented with the following:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Random Number</title>
  </head>
  <body>
    <p>Generating a random number between 1 and 10:
      5
    </p>
  </body>
</html>

Notice that all signs of the PHP code have disappeared. In its place the output of the script has appeared, and it looks just like standard HTML. This example demonstrates several advantages of server-side scripting …

  • No browser compatibility issues. PHP scripts are interpreted by the web server alone, so there’s no need to worry about whether the language features you’re using are supported by the visitor’s browser.
  • Access to server-side resources. In the example above, we placed a random number generated by the web server into the web page. If we had inserted the number using JavaScript, the number would be generated in the browser and someone could potentially amend the code to insert a specific number. Granted, there are more impressive examples of the exploitation of server-side resources, such as inserting content pulled out of a MySQL database.
  • Reduced load on the client. JavaScript can delay the display of a web page significantly (especially on mobile devices!) as the browser must run the script before it can display the web page. With server-side code, this burden is passed to the web server, which you can make as beefy as your application requires (and your wallet can afford).
  • Choice. When writing code that’s run in the browser, the browser has to understand how to run the code given to it. All modern browsers understand HTML, CSS and JavaScript. To write some code that’s run in the browser, you must use one of these languages. By running code on the server that generates HTML, you have a choice of many languages—one of which is PHP.
Basic Syntax and Statements

PHP syntax will be very familiar to anyone with an understanding of JavaScript, C, C++, C#, Objective-C, Java, Perl, or any other C-derived language. But if these languages are unfamiliar to you, or if you’re new to programming in general, there’s no need to worry about it.

Continue reading: Your First PHP Code


How to Optimize MySQL: Indexes, Slow Queries, Configuration

Mon, 10/30/2017 - 17:00

MySQL is still the world's most popular relational database, and yet it's also the most unoptimized: many people leave it at default values, not bothering to investigate further. In this article, we'll look at some MySQL optimization tips we've covered previously, and combine them with novelties that have come out since.

Configuration Optimization

The first - and most skipped! - performance upgrade every user of MySQL should do is tweak the configuration. 5.7 (the current version) has much better defaults than its predecessors, but it's still easy to make improvements on top of those.

We'll assume you're using a Linux-based host or a good Vagrant box like our Homestead Improved so your configuration file will be in /etc/mysql/my.cnf. It's possible that your installation will actually load a secondary configuration file into that configuration file, so look into that - if the my.cnf file doesn't have much content, the file /etc/mysql/mysql.conf.d/mysqld.cnf might.

Editing Configuration

You'll need to be comfortable with using the command line. Even if you haven't been exposed to it yet, now is as good a time as any.

If you're editing locally on a Vagrant box, you can copy the file out into the main filesystem by copying it into the shared folder with cp /etc/mysql/my.cnf /home/vagrant/Code and editing it with a regular text editor, then copying it back into place when done. Otherwise, use a simple text editor like vim by executing sudo vim /etc/mysql/my.cnf.

Note: modify the above path to match the config file's real location - it's possible that it's actually in /etc/mysql/mysql.conf.d/mysqld.cnf

Manual Tweaks

The following manual tweaks should be made out of the box. As per these tips, add this to the config file under the [mysqld] section:

innodb_buffer_pool_size = 1G # (adjust value here, 50%-70% of total RAM)
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 1 # may change to 2 or 0
innodb_flush_method = O_DIRECT
  • innodb_buffer_pool_size - the buffer pool is a storage area for caching data and indexes in memory. It's used to keep frequently accessed data in memory, and when you're running a dedicated or virtual server where the DB will often be the bottleneck, it makes sense to give this part of your app(s) the most RAM. Hence, we give it 50-70% of all RAM. There's a buffer pool sizing guide available in the MySQL docs.
  • the log file size is well explained here but in a nutshell it's how much data to store in a log before wiping it. Note that a log in this case is not an error log or something you might be used to, but instead it indicates checkpoint time because with MySQL, writes happen in the background but still affect foreground performance. Big log files mean better performance because of fewer new and smaller checkpoints being created, but longer recovery time in case of a crash (more stuff needs to be re-written to the DB).
  • innodb_flush_log_at_trx_commit is explained here and indicates what happens with the log file. With 1 we have the safest setting, because the log is flushed to disk after every transaction. With 0 or 2 it's less ACID, but more performant. The difference in this case isn't big enough to outweigh the stability benefits of the setting of 1.
  • innodb_flush_method - to top things off in regards to flushing, this gets set to O_DIRECT to avoid double-buffering. This should always be done, unless the I/O system is very low performance. On most hosted servers like DigitalOcean droplets you'll have SSDs, so the I/O system will be high performance.
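Once MySQL has been restarted, it's worth verifying that the new values are live. A quick check from the MySQL client:

-- names match the directives added to the config file above
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_flush_method';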

There's another tool from Percona which can help us find the remaining problems automatically. Note that if we had run it without the manual tweaks above, only 1 out of 4 fixes would have been identified, because the other 3 depend on user preference and the app's environment.

Continue reading: How to Optimize MySQL: Indexes, Slow Queries, Configuration


Git and WordPress: How to Auto-Update Posts with Pull Requests

Wed, 10/18/2017 - 23:54

We also use WordPress for now, and we use the same peer review approach for content as we do at SitePoint.

We decided to build a tool which automatically pulls content from merged pull requests into articles, giving us the ability to fix typos and update posts from GitHub and see the changes reflected on the live site. This tutorial will walk you through the creation of this tool, so you can start using it for your own WordPress site, or build your own version.

The Plan

The first part is identifying the problem and the situation surrounding it.

  • we use WPGlobus for multi-language support, which means content gets saved like this: {:en}English content{:}{:hr}Croatian content{:}.
  • authors submit PRs via Github, the PRs are peer reviewed and merged, and then (currently) manually imported into WP's Posts UI through the browser.
  • every post has the same folder layout: author_folder/post_folder/language/
  • this is slow and error prone, and sometimes mistakes slip by. It also makes updating posts tedious.

The solution is the following:

  • add a hook processor which will detect pushes to the master branch (i.e. merges from PRs)
  • the processor should look for a meta file in the commit which would contain information on where to save the updated content
  • the processor automatically converts the MD content to HTML, merges the languages in the WPGlobus format, and saves them into the database

If you'd like to follow along (highly recommended), please boot up a good virtual machine environment, install the newest version of WordPress on it, and add the WPGlobus plugin. Alternatively, you can use a prepared WordPress box like VVV. Additionally, make sure your environment has ngrok installed - we'll use it to pipe GitHub hook triggers to our local machine, so we can test locally instead of having to deploy.


For this experiment, let's create a new repository. I'll call mine autopush.

In the settings of this repository, we need to add a new hook. Since we're talking about a temporary Ngrok URL, let's first spin that up. In my case, entering the following on the host machine does the trick:

ngrok http <local-port>

Ngrok gave me a temporary link, so that's what goes into the webhook, with an arbitrary suffix like githook. We only need push events. The json data type is cleaner, so that's selected as a preference, and the final webhook setup looks something like this:

Let's test this now.

git clone <repository-url>
cd autopush
touch README.md
echo "This is a README file" >> README.md
git add -A
git commit -am "We're pushing for the first time"
git push origin master

The ngrok log screen should display something like this:

POST /githook/ 404 Not Found

This is fine. We haven't made the /githook endpoint yet.

Processing Webhooks

We'll read this new data into WordPress with custom logic. Due to the spaghetti-code nature of WP itself, it's easier to circumvent it entirely with a small custom application. First, we'll create the githook folder in the WordPress project's root, and an index.php file inside it. This makes the /githook/ path accessible, and the hook will no longer return 404, but 200 OK.

According to the docs, the payload will have a commits field with a modified field in each commit. Since we're only looking to update posts, not schedule them or delete them - those steps are still manual, for safety - we'll only be paying attention to that one. Let's see if we can catch it on a test push.
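Eventually, the processor will loop over that commits field. Roughly, a sketch of where this is heading (the payload field names come from GitHub's webhook docs; the surrounding logic is hypothetical):

<?php
// githook/index.php - a sketch, not the final implementation
$payload = json_decode(file_get_contents('php://input'), true);

$modified = [];
foreach ($payload['commits'] ?? [] as $commit) {
    $modified = array_merge($modified, $commit['modified']);
}

// ...later: map each modified file to a WordPress post and update it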

First, we'll save our request data to a text file, for debugging purposes. We can do this by modifying our githook/index.php file:

<?php file_put_contents('test.txt', file_get_contents('php://input'));

Then we'll create a new branch, add a file, and push it online.

git checkout -b test-branch
touch <test-file>
git add <test-file>
git commit -am "Added test file"
git push origin test-branch

Sure enough, our test.txt file is filled with the payload now. This is the payload I got. You can see that we have only one commit, and that commit's modified field is empty, while the added field lists the new file. We can also see this happened on refs/heads/test-branch, ergo, we're not interested in it. But what happens if we make a PR out of this branch and merge it?

Continue reading: Git and WordPress: How to Auto-Update Posts with Pull Requests


Symfony Flex: Paving the Path to a Faster, Better Symfony

Sun, 10/15/2017 - 17:26

Symfony Flex is a modern replacement for the Symfony Installer, and not the name of the next Symfony version. As the intro text says:

Internally, Symfony Flex is a Composer plugin that modifies the behavior of the require and update commands. When installing or updating dependencies in a Flex-enabled application, Symfony can perform tasks before and after the execution of Composer tasks.

The new Symfony will be called just Symfony 4, and while this tutorial will deal only with the Flex tool, it will mention some Symfony 4 upgrades as well.

Still Under Development

Symfony Flex can be considered a Composer wrapper, in that it provides your Symfony project with additional options during installation and configuration. It was developed with simplicity in mind, and obviously heavily influenced by the user-friendliness of Laravel. Remember, Laravel got to its current level of popularity due to its ease of use and the low entry barrier it provides newcomers with, and Symfony wanted to emulate this.

It should be noted that both Flex and Symfony 4 are still under development, slated for release somewhere at the end of November this year (2017). As such, some of the features mentioned in this post may have changed by the time you read it, but we'll do our best to keep it up to date.

Most notably, the use of a makefile and the make tool to build a project if Symfony/Console is unavailable is still up in the air, as it seems to not be working properly on some operating systems. Fabien recently held a survey around this, asking for the community's suggestions to a replacement, and overwhelmingly, the community voted in favor of just making Symfony/Console required.

What's Different?

Most notably, Flex respects the coming Symfony 4 updates which boil down to the following major changes:

  • PHP 7+ is required
  • all folders are optional. If your project isn't using one, it doesn't have to be there. This makes the directory tree much simpler and more readable. Additionally, often useless files like .htaccess, LICENSE, and README have been removed as well - a project which needs those can easily add them.
  • there is no more web folder. Instead, there is the public folder, like in all other major frameworks. This consolidates user experience across ecosystems.
  • temporary files go under /var in the root of the project folder, with the /var/cache subfolder reserved for long term cache, like merged class files for deploying apps as read-only artifacts
  • source code goes under /src. No /app.
  • configuration goes into /config.
  • templates go into /templates.
  • Flex will have its own Symfony-verified list of packages that are referenced by one and only one alias. So executing composer require cli will actually trigger Flex, which will look in its list of packages, find the one tagged as cli (in this case, Symfony Console), and install it. These "official" packages are called recipes, and can be found here. To accept user-submitted recipes, a flag in Flex's configuration needs to be set to true: composer config extra.symfony.allow-contrib true. Those recipes can be found here. By officially endorsing some packages, Symfony is in many ways becoming as opinionated as Laravel. While this is bad in some ways, it's very good in many more: a consolidated, opinionated way to build Symfony apps used by most people, so that everyone is on the same page.
  • bundle fragments no longer need to be custom-activated and added into a ton of files. Flex automates this, as well as their removal.
  • instead of parameters in config files, Symfony 4 will be using environment variables, like Laravel

As usual, we'll assume you're already running a healthy VM environment like Homestead Improved so you can follow along.

Okay, let's get our hands dirty with an example app. All Symfony apps can now be started from the bare bones super-minimal Symfony Skeleton app:

composer create-project symfony/skeleton flexy

Notice the created directory structure.

In /public, we no longer have app.php and app_dev.php, only the index.php file. The type of environment (test / dev / prod) is now dictated by environment variables, and the configuration is read from the /config folder.
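For example, a minimal .env file might contain something like this (the variable names follow the Symfony 4 convention; the values are assumptions):

APP_ENV=dev
APP_DEBUG=1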

Notice how the end of the installation process mentions that make cache-warmup was called, and that you can run make serve. This is where the new Symfony uses the "controversial" Makefile approach mentioned above. This might change.

Out of the box, opening this skeleton in the browser will throw an error because no routes have been defined yet. Let's fix this.

index:
    path: /
    defaults: { _controller: 'App\Controller\DefaultController::index' }


We'll need to create this controller and its index action:

<?php

namespace App\Controller;

use Symfony\Component\HttpFoundation\Response;

class DefaultController
{
    public function index()
    {
        return new Response('Hello');
    }
}

This will produce a simple Hello screen, like so:

Execution Permissions

If you try to install a binary like the Symfony/Console with composer req cli, you might run into the following problem:

$ bin/console
-bash: bin/console: Permission denied

This is a known hiccup when using virtual machines, and can be easily fixed by either:

  • running the console with php bin/console instead of running it directly, or
  • adding the "execute" permission to the file on the host machine (not from within the virtual machine), by executing: chmod +x bin/console. This will allow the direct execution of bin/console from within the VM then.

Continue reading: Symfony Flex: Paving the Path to a Faster, Better Symfony


Let’s Compare: RunCloud vs Forge vs ServerPilot

Sun, 09/24/2017 - 15:20

When your website or web application has outgrown your shared hosting account, it is time to move it to the cloud, onto a virtual private server (VPS). Nowadays, VPS providers offer better value in terms of CPU and RAM than their shared hosting counterparts. However, I recall that when I wanted to move my website to Amazon Web Services a few years ago, it took me days to deploy the server and get it ready with PHP, MySQL and Apache.

Fortunately, that is not the case today as we have the help of some cool tools and third party services.

I took a look at three similar services - RunCloud, ServerPilot and Laravel Forge - and compared them to see which performs best. These are SaaS tools that help you deploy, configure, manage and monitor a VPS on any cloud hosting provider, like AWS, Linode, Vultr, DigitalOcean and others.

Registration was easy. You don't need a credit card and you can even use your Facebook, Google, or GitHub account. If you don't fancy using your social media account, then just use your email address.

With RunCloud, what you need to do is deploy a fresh Ubuntu 16.04 LTS VPS on almost any VPS hosting provider. Throughout this tutorial I used a Linode VPS which costs $5 per month.

When your server is ready, you go back to RunCloud and enter your server details like the IP address to get your connection command.

You will get a very long one-line command like this.

All you need to do is run the command on your server as the root user and let it finish. It will probably take about 15 to 20 minutes for the process to complete, and once it's done, you can start to manage your server from within the web panel.

Managing Your Server

Next you need to create a Web Application. That is like a virtual host in the web server. When you do that, you can select your PHP version, and you can assign your domain name to the Web Application. You can assign more than one.

The UI is very user friendly and intuitive and you can find whatever you need rather quickly and easily. My most basic need would be to be able to create virtual hosts (Web Application), assign domain names and create databases and perhaps deploy my application from GitHub or any other Git server. There is also a script installer that supports WordPress, Joomla, Concrete5, PHPMyAdmin and a few more common PHP tools.

If you do PHP development and use Git, RunCloud supports deploying your script from GitHub, Bitbucket, GitLab and also your own custom Git server installation.

We'll look into performance later.

ServerPilot is probably the oldest among the three, and the most well-marketed. When I ask around whether anybody is using a server management tool, the answer is usually ServerPilot. ServerPilot has a Free account that you can use with limited features, and there are also the Coach ($10/mo) and Business ($49/mo) plans. Please bear in mind, these prices are per server deployment, not per ServerPilot account.

Coach has more features, like free SSL via Let's Encrypt, server health stats and others. The Business package is more like a mini New Relic, or an application performance monitoring service: you can see slow script requests, application resource stats and more.

Of course the Free package is good enough, but limited compared to their Coach and Business options.

Connecting A Server

This feels similar to RunCloud. You also need a fresh Ubuntu 16.04 VPS server to start. If you have the root password to your server, you don't need to copy and paste the installer command via SSH. You can enter your server IP address and your root password, and ServerPilot will get your server ready in a few minutes.

However, if you don't want to enter your server's root password into ServerPilot, you can opt not to. You can get the install command instead, and copy and paste it into your server via SSH.

Honestly, the server deployment process feels smooth and fast; I was amazed by it. Once done, I was redirected to create an App right away. Here, an App is like a virtual host. Once that was done, there was an option to install WordPress, so I thought I'd give that a try. You can also select your PHP version if you want - I selected PHP 7.1.

Managing Your Server

From registration to deploying a server and getting it up and running, the experience was really good and fast. You get a server running in no time at all.

While WordPress is an install option, I was hoping to see other PHP tools like Joomla and Concrete5 available as well. Unfortunately, they are not present, and ServerPilot also doesn't give you the option to set your web root to anything other than /public.

Another thing I was hoping to see is a way to deploy my application from GitHub, but I couldn't find one. I learned from the documentation that if I want to deploy my script via Git, I still have to do it via SSH.

This is also the case with scheduling, or crontab. If you're familiar with cPanel, you would hope to have scheduling built into ServerPilot's web panel. Unfortunately, that is not the case. You can still manage your own crontab via SSH, though.

Laravel Forge

Unfortunately, if you just want to try this out, there is no free account tier. Pricing starts at $15/mo for the Growth account, and $39/mo for the Business account. The good thing is that this covers an unlimited number of server deployments. There is also a 5-day free trial for whichever account you choose. And if you opt for an annual subscription, you get a $40 discount on the Growth account, and $69 off the Business account.

The difference between the two is mainly about adding team members to your account to help you with server administration. The Growth account is mostly for personal use, while the Business account allows you to add team members to help you out.

Connecting a Server

Unlike ServerPilot and RunCloud, Laravel Forge connects to your cloud hosting provider via API and deploys servers there on your behalf. This means you don't have to deploy your server with the cloud hosting provider first and enter its details later - Laravel Forge creates the server for you. For now, Laravel Forge supports DigitalOcean, AWS and Linode.

For cloud hosting providers that are not currently supported, you can still add servers to Laravel Forge under the Custom VPS option. There, it works pretty much like ServerPilot and RunCloud: you need a fresh Ubuntu 16.04 64-bit VPS, and you then initiate the connect procedure in Laravel Forge. Just like with RunCloud, you need to run the install script command via SSH.

Managing Your Server

Laravel Forge has a lot covered when it comes to managing your server. When your server is ready, you create a Site (a virtual host). Since Laravel Forge was created by a PHP developer, Git is the main method for deploying your files. Alternatively, you can get started by installing WordPress.

When creating your site, you are presented with the option to add your domain name, and you can choose whether to enable wildcard sub-domains. However, there is no option for multiple domain names on a single site.

There is also scheduling, MySQL database management, and more. You can also integrate your server with Blackfire.io for application performance monitoring, and with Papertrail for log analysis.

Continue reading %Let’s Compare: RunCloud vs Forge vs ServerPilot%

Categories: IT News

Symfony Console Beyond the Basics – Helpers and Other Tools

Thu, 09/14/2017 - 08:00

It's undeniable how useful console commands can be when developing software. Not too long ago we re-introduced the Symfony Console component.

This component allows us to create structured and testable CLI commands. We created some simple commands and tested them; but when our commands become bigger and more complex, we need a different set of tools.

This is what we are going to look at today: advanced Symfony console tools.

Let's create a command that we can use to show some of these features. Most of the basic functionality was shown in the re-introduction to the Symfony console article, so be sure to check it before advancing - it's a quick but useful read!

Installation

composer require symfony/console

Essential information about Composer can be found here, and if you're not familiar with well-designed, isolated PHP environments in which to develop your PHP apps, like Vagrant, we have a fantastic book explaining it all in depth, available for purchase here.

Creating Our Command

Let's create a command for an all-time favorite: Fizzbuzz.

Fizzbuzz is a simple problem, often used in programming interviews to assess the programming competence of the interviewee. The definition of Fizzbuzz normally comes in the following form:

Write a program that prints the numbers from 1 to x. But for multiples of three print "Fizz" instead of the number and for the multiples of five print "Buzz". For numbers which are multiples of both three and five, print "FizzBuzz".

Our command will receive an argument which will be the top limit for Fizzbuzz.

First of all, let's create our Fizzbuzz class.

<?php
declare(strict_types=1);

namespace FizzBuzz;

class Fizzbuzz
{
    public function isFizz(int $value): bool
    {
        if ($value % 3 === 0) {
            return true;
        }
        return false;
    }

    public function isBuzz(int $value): bool
    {
        if ($value % 5 === 0) {
            return true;
        }
        return false;
    }

    public function calculateFizzBuzz(int $number): bool
    {
        if ($this->isFizz($number) && $this->isBuzz($number)) {
            echo "FizzBuzz \n";
            return true;
        }

        if ($this->isFizz($number)) {
            echo "Fizz \n";
            return true;
        }

        if ($this->isBuzz($number)) {
            echo "Buzz \n";
            return true;
        }

        echo $number . "\n";
        return true;
    }

    public function firstNFizzbuzz(int $maxValue): void
    {
        $startValue = 1;
        while ($startValue <= $maxValue) {
            $this->calculateFizzBuzz($startValue);
            $startValue++;
        }
    }
}

Pretty straightforward. The firstNFizzbuzz() method prints the Fizzbuzz results for the numbers 1 up to $maxValue. It does this by calling the calculateFizzBuzz() method in a loop.

Next, let's write our command. Create a FizzCommand.php file with the following contents:
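The full listing follows in the rest of the article; as a rough sketch (the command name and argument handling here are my own choices, not necessarily the article's), it might look something like this:

<?php
declare(strict_types=1);

namespace FizzBuzz;

use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputArgument;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class FizzCommand extends Command
{
    protected function configure()
    {
        // Register the command name and its single "limit" argument.
        $this->setName('fizzbuzz')
            ->setDescription('Prints the Fizzbuzz sequence up to the given limit')
            ->addArgument('limit', InputArgument::REQUIRED, 'The top limit for Fizzbuzz');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        // Delegate the actual work to the Fizzbuzz class above.
        $limit = (int) $input->getArgument('limit');
        (new Fizzbuzz())->firstNFizzbuzz($limit);
    }
}

You would then register the command with a console Application instance and run it with something like php console fizzbuzz 15 - again, names here are illustrative.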

Continue reading %Symfony Console Beyond the Basics – Helpers and Other Tools%

Categories: IT News

Rapid Development of Zend Expressive Modules

Mon, 09/04/2017 - 18:00

I have learned a few tricks when writing Zend Expressive modules that I want to share with you.

Please follow the previous post first to set up a proper working environment. I explained how to install and configure Zend Expressive with Doctrine, Gulp, and an abstract reflection factory - it'll take a total of 10 minutes.

In this tutorial, we'll build a simple read-only blog module (a page listing blog posts from a database) in minutes, demonstrating the kind of rapid development one is capable of with Zend Expressive.

Module Setup

Run this command from your expressive app to get started:

./vendor/bin/expressive module:create Blog

This will generate some base code for a Blog module and will register your module automatically with your application. It will also register your module with the Composer autoloader.
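For orientation, the generated module skeleton looks roughly like this (inferred from the paths used in the rest of this tutorial, so the generator's exact output may differ):

src/Blog/
    src/
        ConfigProvider.php
    templates/

We'll add a config/doctrine/ folder to it ourselves in a moment.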

Doctrine Entity and Database Table

Let's make our Blog entity and database tables. First, we need to let our application know that this module provides Doctrine entities.

Open src/Blog/src/ConfigProvider.php and add the following:

public function __invoke()
{
    return [
        'dependencies' => $this->getDependencies(),
        'doctrine' => $this->getDoctrine(),
        'templates' => $this->getTemplates(),
    ];
}

/**
 * @return array
 */
public function getDoctrine(): array
{
    return [
        'driver' => [
            'orm_default' => [
                'drivers' => [
                    'Blog\Entity' => 'blog_entity',
                ],
            ],
            'blog_entity' => [
                'class' => \Doctrine\ORM\Mapping\Driver\SimplifiedYamlDriver::class,
                'cache' => 'array',
                'paths' => [
                    dirname(__DIR__) . '/config/doctrine' => 'Blog\Entity',
                ],
            ],
        ],
    ];
}

Create a blog post entity config at src/Blog/config/doctrine/BlogPost.orm.yml:

---
Blog\Entity\BlogPost:
  type: entity
  table: blog_post
  id:
    id:
      type: integer
      generator:
        strategy: AUTO
  fields:
    title:
      type: string
      length: 255
    content:
      type: string
      length: 16777215

Then, run ./vendor/bin/doctrine orm:generate-entities src.
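That command reads the YAML mapping and writes the entity class for you. Hand-reconstructed from the mapping above (so the generator's exact output may differ), the result looks roughly like this:

<?php

namespace Blog\Entity;

class BlogPost
{
    /** @var int */
    private $id;

    /** @var string */
    private $title;

    /** @var string */
    private $content;

    public function getId()
    {
        return $this->id;
    }

    public function setTitle($title)
    {
        $this->title = $title;
        return $this;
    }

    public function getTitle()
    {
        return $this->title;
    }

    public function setContent($content)
    {
        $this->content = $content;
        return $this;
    }

    public function getContent()
    {
        return $this->content;
    }
}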

Continue reading %Rapid Development of Zend Expressive Modules%

Categories: IT News

Rapid Enterprise App Development with Zend Expressive

Mon, 08/28/2017 - 18:00

If you've ever done a Zend Framework quick start, you've probably never worked in Zend Framework. The quick start has historically been anything but quick, and it's easy to lose interest and move on to the next thing.

Zend Expressive greatly improves upon this experience with its wizard-driven composer create-project command. However, it can still be daunting to set up, because there are so many choices to make up front. This tutorial guides you through my recommended setup for rapid development, one which will yield an enterprise-level, robust application.

This tutorial is not about setting up your environment, so I am going to assume that you have a good working environment like Homestead Improved.

Project Setup

Start your project by running the following command in the folder where you keep your projects (Code on Homestead Improved):

composer create-project zendframework/zend-expressive-skeleton expressive

You will be prompted to make a few decisions along the way. Use these answers:

  • What type of installation would you like?
    • Modular
  • Which container do you want to use for dependency injection?
    • Zend ServiceManager
  • Which router do you want to use?
    • Zend Router
  • Which template engine do you want to use?
    • Twig
  • Which error handler do you want to use during development?
    • Whoops
  • Please select which config file you wish to inject 'Zend\Validator\ConfigProvider' into?
    • config/config.php
  • Remember this option for other packages of the same type?
    • y

Then, run these commands:

cd expressive && git init && git config color.ui true && git add . && git commit -m "Initial commit" && chmod -R +w data;

This initializes a repository in the newly created folder and makes the data folder writable.

Then, start up a PHP server for testing with

composer serve

... and browse to http://localhost:8080 or just visit the VM's IP or virtual host if you're using Homestead Improved.

Understanding Expressive

Expressive's folder structure looks like this:

bin/
config/
data/
    cache/
public/
    index.php
src/
    App
test/
    AppTest
vendor/

Most of it is self-explanatory. Expressive provides an App module by default. You can put all of your code in here, or build separate modules as you build out larger features.

Continue reading %Rapid Enterprise App Development with Zend Expressive%

Categories: IT News

ReactJS in PHP: Writing Compilers Is Easy and Fun!

Fri, 08/25/2017 - 18:00

I used to use an extension called XHP. It enables HTML-in-PHP syntax for generating front-end markup. I reached for it recently, and was surprised to find that it was no longer officially supported for modern PHP versions.

So, I decided to implement a user-land version of it, using a basic state-machine compiler. It seemed like it would be a fun project to do with you!

The code for this tutorial can be found on GitHub.

Creating Compilers

Many developers avoid writing their own compilers or interpreters, thinking the topic is too complex or difficult to explore properly. I used to feel that way too. Compilers can be difficult to make well, and the topic can be incredibly deep. But that doesn't mean you can't make a compiler.

Making a compiler is like making a sandwich. Anyone can get the ingredients and put it together. You can make a sandwich. You can also go to chef school and learn how to make the best damn sandwich the world has ever seen. You can study the art of sandwich making for years, and people can talk about your sandwiches in other lands. You’re not going to let the breadth and complexity of sandwich-making prevent you from making your first sandwich, are you?

Compilers (and interpreters) begin with humble string manipulation and temporary variables. When they're sufficiently popular (or sufficiently slow), the experts can step in to replace the string manipulation and temporary variables with unicorn tears and cynicism.

At a fundamental level, compilers take a string of code and run it through a couple of steps:

  1. The code is split into tokens – meaningful characters and sub-strings – which the compiler will use to derive meaning. The statement if (isEmergency) alert("there is an emergency") could be considered to contain tokens like if, isEmergency, alert, and "there is an emergency"; and these all mean something to the compiler.

    The first step is to split the entire source code up into these meaningful bits, so that the compiler can start to organize them in a logical hierarchy, so it knows what to do with the code.

  2. The tokens are arranged into the logical hierarchy (sometimes called an Abstract Syntax Tree) which represents what needs to be done in the program. The previous statement could be understood as “Work out if the condition (isEmergency) evaluates to true. If it does, run the function (alert) with the parameter ("there is an emergency")”.

Using this hierarchy, the code can be immediately executed (in the case of an interpreter or virtual machine) or translated into other languages (in the case of languages like CoffeeScript and TypeScript, which are both compile-to-JavaScript languages).
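As a toy illustration (this is not the exact representation we'll build below), the two steps might turn that statement into something like this in PHP:

// Step 1: a flat list of meaningful tokens...
$tokens = ["if", "(", "isEmergency", ")", "alert", "(", '"there is an emergency"', ")"];

// Step 2: ...arranged into a hierarchy that describes what to do.
$tree = [
    "type" => "condition",
    "test" => "isEmergency",
    "then" => [
        "type" => "call",
        "function" => "alert",
        "arguments" => ['"there is an emergency"'],
    ],
];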

In our case, we want to maintain most of the PHP syntax, but we also want to add our own little bit of syntax on top. We could create a whole new interpreter…or we could preprocess the new syntax, compiling it to syntactically valid PHP code.

I’ve written about preprocessing PHP before, and it’s my favorite approach to adding new syntax. In this case, we need to write a more complex script; so we’re going to deviate from how we’ve previously added new syntax.

Generating Tokens

Let’s create a function to split code into tokens. It begins like this:

function tokens($code) {
    $tokens = [];
    $length = strlen($code);
    $cursor = 0;

    while ($cursor < $length) {
        if ($code[$cursor] === "{") {
            print "ATTRIBUTE STARTED ({$cursor})" . PHP_EOL;
        }

        if ($code[$cursor] === "}") {
            print "ATTRIBUTE ENDED ({$cursor})" . PHP_EOL;
        }

        if ($code[$cursor] === "<") {
            print "ELEMENT STARTED ({$cursor})" . PHP_EOL;
        }

        if ($code[$cursor] === ">") {
            print "ELEMENT ENDED ({$cursor})" . PHP_EOL;
        }

        $cursor++;
    }
}

$code = '
    <?php

    $classNames = "foo bar";
    $message = "hello world";

    $thing = (
        <div
            className={() => { return "outer-div"; }}
            nested={<span className={"nested-span"}>with text</span>}
        >
            a bit of text before
            <span>
                {$message} with a bit of extra text
            </span>
            a bit of text after
        </div>
    );
';

tokens($code);

// ELEMENT STARTED (5)
// ELEMENT STARTED (95)
// ATTRIBUTE STARTED (122)
// ELEMENT ENDED (127)
// ATTRIBUTE STARTED (129)
// ATTRIBUTE ENDED (151)
// ATTRIBUTE ENDED (152)
// ATTRIBUTE STARTED (173)
// ELEMENT STARTED (174)
// ATTRIBUTE STARTED (190)
// ATTRIBUTE ENDED (204)
// ELEMENT ENDED (205)
// ELEMENT STARTED (215)
// ELEMENT ENDED (221)
// ATTRIBUTE ENDED (222)
// ELEMENT ENDED (232)
// ELEMENT STARTED (279)
// ELEMENT ENDED (284)
// ATTRIBUTE STARTED (302)
// ATTRIBUTE ENDED (311)
// ELEMENT STARTED (350)
// ELEMENT ENDED (356)
// ELEMENT STARTED (398)
// ELEMENT ENDED (403)

This is from tokens-1.php

We’re off to a good start. By stepping through the code, we can check to see what each character is (and identify the ones that matter to us). We’re seeing, for instance, that the first element opens when we encounter a < character, at index 5. The first element closes at index 210.

Unfortunately, that first opening is being incorrectly matched to <?php. That’s not an element in our new syntax, so we have to stop the code from picking it out:

preg_match("#^</?[a-zA-Z]#", substr($code, $cursor, 3), $matchesStart); if (count($matchesStart)) { print "ELEMENT STARTED ({$cursor})" . PHP_EOL; } // ... // ELEMENT STARTED (95) // ATTRIBUTE STARTED (122) // ELEMENT ENDED (127) // ATTRIBUTE STARTED (129) // ATTRIBUTE ENDED (151) // ATTRIBUTE ENDED (152) // ATTRIBUTE STARTED (173) // ELEMENT STARTED (174) // ...

This is from tokens-2.php

Instead of checking only the current character, our new code checks three characters: if they match the pattern <div or </div, but not <?php or $num1 < $num2.

There’s another problem: our example uses arrow function syntax, so => is being matched as an element closing sequence. Let’s refine how we match element closing sequences:

preg_match("#^=>#", substr($code, $cursor - 1, 2), $matchesEqualBefore); preg_match("#^>=#", substr($code, $cursor, 2), $matchesEqualAfter); if ($code[$cursor] === ">" && !$matchesEqualBefore && !$matchesEqualAfter) { print "ELEMENT ENDED ({$cursor})" . PHP_EOL; } // ... // ELEMENT STARTED (95) // ATTRIBUTE STARTED (122) // ATTRIBUTE STARTED (129) // ATTRIBUTE ENDED (151) // ATTRIBUTE ENDED (152) // ATTRIBUTE STARTED (173) // ELEMENT STARTED (174) // ...

This is from tokens-3.php

As with JSX, it would be good for attributes to allow dynamic values (even if those values are nested JSX elements). There are a few ways we could do this, but the one I prefer is to treat all attributes as text, and tokenize them recursively. To do this, we need to have a kind of state machine which tracks how many levels deep we are in an element and attribute. If we’re inside an element tag, we should trap the top level {…} as a string attribute value, and ignore subsequent braces. Similarly, if we’re inside an attribute, we should ignore nested element opening and closing sequences:

function tokens($code) {
    $tokens = [];
    $length = strlen($code);
    $cursor = 0;

    $elementLevel = 0;
    $elementStarted = null;
    $elementEnded = null;

    $attributes = [];
    $attributeLevel = 0;
    $attributeStarted = null;
    $attributeEnded = null;

    while ($cursor < $length) {
        $extract = trim(substr($code, $cursor, 5)) . "...";

        if ($code[$cursor] === "{" && $elementStarted !== null) {
            if ($attributeLevel === 0) {
                print "ATTRIBUTE STARTED ({$cursor}, {$extract})" . PHP_EOL;
                $attributeStarted = $cursor;
            }
            $attributeLevel++;
        }

        if ($code[$cursor] === "}" && $elementStarted !== null) {
            $attributeLevel--;
            if ($attributeLevel === 0) {
                print "ATTRIBUTE ENDED ({$cursor})" . PHP_EOL;
                $attributeEnded = $cursor;
            }
        }

        preg_match("#^</?[a-zA-Z]#", substr($code, $cursor, 3), $matchesStart);

        if (count($matchesStart) && $attributeLevel < 1) {
            print "ELEMENT STARTED ({$cursor}, {$extract})" . PHP_EOL;
            $elementLevel++;
            $elementStarted = $cursor;
        }

        preg_match("#^=>#", substr($code, $cursor - 1, 2), $matchesEqualBefore);
        preg_match("#^>=#", substr($code, $cursor, 2), $matchesEqualAfter);

        if (
            $code[$cursor] === ">"
            && !$matchesEqualBefore
            && !$matchesEqualAfter
            && $attributeLevel < 1
        ) {
            print "ELEMENT ENDED ({$cursor})" . PHP_EOL;
            $elementLevel--;
            $elementEnded = $cursor;
        }

        if ($elementStarted && $elementEnded) {
            // TODO
            $elementStarted = null;
            $elementEnded = null;
        }

        $cursor++;
    }
}

// ...
// ELEMENT STARTED (95, <div...)
// ATTRIBUTE STARTED (122, {() =...)
// ATTRIBUTE ENDED (152)
// ATTRIBUTE STARTED (173, {<spa...)
// ATTRIBUTE ENDED (222)
// ELEMENT ENDED (232)
// ELEMENT STARTED (279, <span...)
// ELEMENT ENDED (284)
// ELEMENT STARTED (350, </spa...)
// ELEMENT ENDED (356)
// ELEMENT STARTED (398, </div...)
// ELEMENT ENDED (403)

This is from tokens-4.php

We’ve added new $attributeLevel, $attributeStarted, and $attributeEnded variables; to track how deep we are in the nesting of attributes, and where the top-level starts and ends. Specifically, if we’re at the top level when an attribute’s value starts or ends, we capture the current cursor position. Later, we’ll use this to extract the string attribute value and replace it with a placeholder.

We’re also starting to capture $elementStarted and $elementEnded (with $elementLevel fulfilling a similar role to $attributeLevel) so that we can capture a full element opening or closing tag. In this case, $elementEnded doesn’t refer to the closing tag but rather the closing sequence of characters of the opening tag. Closing tags are treated as entirely separate tokens…

After extracting a small substring after the current cursor position, we can see elements and attributes starting and ending exactly where we expect. The nested control structures and elements are captured as strings, leaving only the top-level elements, non-attribute nested elements, and attribute values.

Let’s package these tokens up, associating attributes with the tags in which they are defined:

function tokens($code) {
    $tokens = [];
    $length = strlen($code);
    $cursor = 0;

    $elementLevel = 0;
    $elementStarted = null;
    $elementEnded = null;

    $attributes = [];
    $attributeLevel = 0;
    $attributeStarted = null;
    $attributeEnded = null;

    $carry = 0;

    while ($cursor < $length) {
        if ($code[$cursor] === "{" && $elementStarted !== null) {
            if ($attributeLevel === 0) {
                $attributeStarted = $cursor;
            }
            $attributeLevel++;
        }

        if ($code[$cursor] === "}" && $elementStarted !== null) {
            $attributeLevel--;
            if ($attributeLevel === 0) {
                $attributeEnded = $cursor;
            }
        }

        if ($attributeStarted && $attributeEnded) {
            $position = (string) count($attributes);
            $positionLength = strlen($position);

            $attribute = substr(
                $code,
                $attributeStarted + 1,
                $attributeEnded - $attributeStarted - 1
            );

            $attributes[$position] = $attribute;

            $before = substr($code, 0, $attributeStarted + 1);
            $after = substr($code, $attributeEnded);

            $code = $before . $position . $after;
            $cursor = $attributeStarted + $positionLength + 2 /* curlies */;
            $length = strlen($code);

            $attributeStarted = null;
            $attributeEnded = null;

            continue;
        }

        preg_match("#^</?[a-zA-Z]#", substr($code, $cursor, 3), $matchesStart);

        if (count($matchesStart) && $attributeLevel < 1) {
            $elementLevel++;
            $elementStarted = $cursor;
        }

        preg_match("#^=>#", substr($code, $cursor - 1, 2), $matchesEqualBefore);
        preg_match("#^>=#", substr($code, $cursor, 2), $matchesEqualAfter);

        if (
            $code[$cursor] === ">"
            && !$matchesEqualBefore
            && !$matchesEqualAfter
            && $attributeLevel < 1
        ) {
            $elementLevel--;
            $elementEnded = $cursor;
        }

        if ($elementStarted !== null && $elementEnded !== null) {
            $distance = $elementEnded - $elementStarted;
            $carry += $cursor;

            $before = trim(substr($code, 0, $elementStarted));
            $tag = trim(substr($code, $elementStarted, $distance + 1));
            $after = trim(substr($code, $elementEnded + 1));

            $token = ["tag" => $tag, "started" => $carry];

            if (count($attributes)) {
                $token["attributes"] = $attributes;
            }

            $tokens[] = $before;
            $tokens[] = $token;

            $attributes = [];
            $code = $after;
            $length = strlen($code);
            $cursor = 0;

            $elementStarted = null;
            $elementEnded = null;

            continue;
        }

        $cursor++;
    }

    return $tokens;
}

$code = '
    <?php

    $classNames = "foo bar";
    $message = "hello world";

    $thing = (
        <div
            className={() => { return "outer-div"; }}
            nested={<span className={"nested-span"}>with text</span>}
        >
            a bit of text before
            <span>
                {$message} with a bit of extra text
            </span>
            a bit of text after
        </div>
    );
';

tokens($code);

// Array
// (
//     [0] => <?php
//
//     $classNames = "foo bar";
//     $message = "hello world";
//
//     $thing = (
//     [1] => Array
//         (
//             [tag] => <div className={0} nested={1}>
//             [started] => 157
//             [attributes] => Array
//                 (
//                     [0] => () => { return "outer-div"; }
//                     [1] => <span className={"nested-span"}>with text</span>
//                 )
//         )
//
//     [2] => a bit of text before
//     [3] => Array
//         (
//             [tag] => <span>
//             [started] => 195
//         )
//
//     [4] => {$message} with a bit of extra text
//     [5] => Array
//         (
//             [tag] => </span>
//             [started] => 249
//         )
//
//     [6] => a bit of text after
//     [7] => Array
//         (
//             [tag] => </div>
//             [started] => 282
//         )
// )

This is from tokens-5.php

There’s a lot going on here, but it’s all just a natural progression from the previous version. We use the captured attribute start and end positions to extract the entire attribute value as one big string. We then replace each captured attribute with a numeric placeholder and reset the code string and cursor positions.

As each element closes, we associate all the attributes since the element was opened, and create a separate array token from the tag (with its placeholders), attributes and starting position. The result may be a little harder to read, but it is spot on in terms of capturing the intent of the code.

So, what do we do about those nested element attributes?

Continue reading %ReactJS in PHP: Writing Compilers Is Easy and Fun!%

Categories: IT News

Are Bitwise Operators Still Relevant in Modern PHP?

Mon, 08/21/2017 - 18:00

Many of you probably scratched your heads reading this title. "Bitwhat?"

In this article, we'll look at what bitwise operators are, and whether or not their use is still relevant in this modern age of computing.

Example Use Case

I'd been trying to really understand bitwise operators for a long time, until an example made it click for me. So that's what we'll do - dive straight into an example.

Imagine you have a website on which a given user can have specific permissions. For example, a magazine like SitePoint:

• an author can CRUD drafts, and edit their own profile.
  • an editor can, in addition to the above, CRUD drafts and finished posts, and CRUD author profiles.
  • an administrator can, in addition to the above, add administrator permissions.

Since a user can have multiple permissions, there are several ways of defining permissions in a database and the system using it.

The Double Join

Add roles, add permissions, attach permissions to roles in a join table, then create another join table and bind some roles to some users.

This approach creates four extra tables:

  • permissions
  • roles
  • permissions<->roles
  • roles<->users

Quite a bit of overhead. Imagine having to edit these, or list them regularly in some frequently visited parts of the app. Only heavy caching would save this app from collapsing under heavy load.

One advantage, though, is that by defining roles really well with intricate permissions, you only have to stick users into roles and you're good - it keeps that join table light and fast.

The Single Join

Add permissions, add a join table, and attach some permissions to some users.

This approach creates two extra tables:

  • permissions
  • permissions<->users

Much less overhead than the previous example, but you have many more entries in the join table because a user can have a LOT of permissions (just the CRUD for drafting is 4 permissions on its own). With a lot of users and a lot of permissions, this table can get heavy quickly.

The Column Stampede

Add a column to the users table for each permission, then make its datatype a tinyint(1) (basically a boolean) to mark the permission as "on" or "off".

Setting permissions for a user would then look something like this:

UPDATE `users` SET `editProfile` = 1, `deleteProfile` = 0, `createDraft` = 1, `publishDraft` = 0 ... WHERE `id` = 5

This approach adds no extra tables, but needlessly expands the table into gargantuan width, and requires a modification of the database every time a new permission is added. It's a fine approach for when you know you'll have at most two or three permissions for the foreseeable future, but shouldn't be used for anything more than that.

However, because the list of columns, when looked at from afar, resembles a binary number (1010), this approach is an excellent segue into another...

The Bitwise Approach

Before we dive deeper into this approach, let's have a crash course in binary.

Binary Numbers

All computers store data as binary: 0 or 1. So, the number 14 is actually stored as: 1110. How so?

Binary numbers are evaluated from right to left when calculating their value, just like decimal numbers. So the number 1337 means:

  • 7 x 1
  • 3 x 10
  • 3 x 100
  • 1 x 1000

Each digit in the decimal system (base 10) gets multiplied by a successively higher power of 10: the first digit by 1, the next by 10, the one after that by 100, the next by 1000, etc.
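This positional idea is exactly what the bitwise approach exploits: each permission claims one bit of a single integer. As a minimal sketch of where we're heading (the permission names and bit assignments here are mine, purely for illustration):

<?php

// Each permission occupies its own bit: 1, 2, 4, 8, ...
const EDIT_PROFILE  = 1 << 0; // binary 0001
const CREATE_DRAFT  = 1 << 1; // binary 0010
const PUBLISH_DRAFT = 1 << 2; // binary 0100
const DELETE_POST   = 1 << 3; // binary 1000

// Combine permissions with bitwise OR; the result is one integer
// that fits into a single database column.
$permissions = EDIT_PROFILE | CREATE_DRAFT; // binary 0011, i.e. 3

// Check a permission with bitwise AND.
if ($permissions & CREATE_DRAFT) {
    echo "This user may create drafts\n";
}

// Revoke a permission by AND-ing with its complement.
$permissions &= ~CREATE_DRAFT; // back to binary 0001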

Continue reading %Are Bitwise Operators Still Relevant in Modern PHP?%

Categories: IT News

How to Master Your API Workflow with Postman

Thu, 08/17/2017 - 18:00

Building good APIs is hard, and anyone who has had the chance to do so can relate. A project can easily grow to become a mess. You can keep trying to adopt approaches that make it more enjoyable, like a documentation-first workflow, but something always feels clumsy.

I was trying out Postman lately - a tool we've briefly covered before - and I discovered that they're doing a great job of providing an integrated environment for the different components of an API: authorization, testing, documentation, versioning, and so on.

In this article, we're going to walk through the different features that Postman provides and how we can organize them to make our API workflow less painful.

Making Requests

The first step to discovering Postman is making a simple API request and seeing the response.

From the screenshot above we can see that the view incorporates different elements. The top bar contains the request method (in this case GET), and right next to it we specify the endpoint to make the request to. If the request has some params we can specify them by clicking the Params button, which will open a table for entering them. When ready, we can click the send button to initiate the request.

The next section contains five tabs:

  • Authorization: Specifies how to authorize the request. Like Basic Auth, OAuth2, etc.
  • Headers: Any header that should be sent with the request, like Content-Type, Authorization, etc.
  • Body: The request body for request types like POST, PUT, etc.
  • Pre-request Script: JS code to be executed before performing the request. (More about this later)
  • Tests: JS code to validate the response payload.

The bottom section contains all the details about the response (status, time and size). The four tabs are self-explanatory, except for Tests, which contains the test results if we have any (more about this later).


Authorization

Postman supports all types of authorization, as we saw above. We're going to focus on token authorization, set via a header. You can read more about authorization here.

The header format that Postman will use is:

Authorization: Bearer <TOKEN>

Now, we'll go ahead and get a token we can use for our future requests. I'm testing on an in-development application of mine, but you can use any application you have lying around, or set one up quickly with Laravel as described here.

Not familiar with Laravel? Try our premium course - it's a comprehensive introduction, and you'll get the upcoming newer version of it automatically if you grab it in its current iteration!

At this point, we have a token to be used for querying data from the API as an authenticated user. However, copying and pasting the token for every request is tedious!

Environment Variables

This nice feature alleviates the pain of copy/paste and groups our variables in one place. An environment is an execution context: we may have a local environment, testing, staging, etc.

Postman has different scopes for variables:

  • Global
  • Environment
  • Local
  • Data

Global variables are available everywhere, regardless of the selected environment. You can read more in the documentation.

We need at least three variables for now:

  • domain: the currently active subdomain: company1, company2, etc.
  • url: our app URL.
  • token: Token for future authentication.

Now we can update our login endpoint to use our new environment variables, but first we need to select our environment from the selection box at the top right.

We can use our variables inside the URL, parameters, tests, etc. But how are we going to use our token now?

Well, we have two choices. The first is to copy the token and paste it as a value for the token variable (this is what we're trying to avoid).

The second method is to set it via code executed after the request completes. This is where tests come into play - let's find out what they are!

Continue reading %How to Master Your API Workflow with Postman%

Categories: IT News

PHPBot – Can a PHP Bot Help You Look up Documentation Faster?

Tue, 08/15/2017 - 18:00

I came across PHPBot the other day (not to be confused with Botman or Fondbot) - a "chatbot" which helps you look up PHP manual entries and generates example code for them.

If you follow me on Twitter, you know I don't believe in chatbots being chatbots - as someone who's built some for both commercial and personal purposes, and as someone who was around in the age of IRC auto-responding scripts, I see chatbots as little more than a marketing fad designed to impress today's Snapchat generation. However, every now and then one appears that's actually useful. Could this be the one? Let's take a look.


The idea is that you use the text input on the page to enter a PHP related term (like "ksort") and the application will reply with an explanation and some example code, like so:

ksort input" />

Different queries will produce different responses. So, if you ask it for echo:

echo" />

Continue reading %PHPBot – Can a PHP Bot Help You Look up Documentation Faster?%

Categories: IT News

Duolingo for PHP – How Much PHP Can Apps like Mimo Teach?

Sat, 08/12/2017 - 18:00

Mimo is an app which claims to teach coding "on the go". As a long-time Duolingo user, I could relate to this approach - particularly as I was becoming increasingly interested in achieving some degree of basic Swift literacy in the coming weeks.

In an attempt to objectively assess the app's ability to teach people new things, I decided to look at what it's offering and see if it's something I can recommend to people just starting out with a new language.


The application is free for the first one or two lessons of every course, at which point you're asked to either spread the word on social media or pay to unlock the next one. The lesson after that, however, has to be paid for.

Like Duolingo, Mimo uses streaks to gamify learning, keeping track of how many consecutive days you've been using it. To further gamify things, the app features a bunch of badges and achievements to collect, though the real killer feature would be a Duolingo-like "follow user" approach which also allows for direct communication on certain lesson slides.

Obviously, there's quite a bit of collecting and achieving to be done in Mimo!

The app lets users set their own desired learning frequency, but all this does is dictate how often the app will remind you to use it - it has no other effect on the app in terms of course length or content.

It's a sad world where 20 minutes of learning per day is considered insane, isn't it?

At the time of writing, Premium access costs a whopping 54.99 GBP per year, or 4.58 GBP per month, but only payable annually. It seems the app's developers are well aware of the fact that most people abandon learning and self-improvement apps on mobile after a month.

The biggest problem is that with what's offered for free, it's hard to justify spending this amount.


The application offers individual language courses - PHP, HTML, Swift, "Hacking", etc. - but also course tracks that bundle several courses in order, aimed at teaching a bigger whole. For example, the Make a Website track has the sub-courses Programming, Programming 2, HTML, CSS, JavaScript, etc.

This course doesn't whet the appetite, however - what's covered is a very basic and language-agnostic concept of variables, and then the paid content begins. That's 2 out of 43 lessons free - hardly enough content to make an educated purchasing decision. Also, despite what seems like a lot of content (43 lessons), it should be noted that lessons are typically only 10-15 minutes long.

Continue reading %Duolingo for PHP – How Much PHP Can Apps like Mimo Teach?%

Categories: IT News