Sitepoint PHP

Facebook's new artificial intelligence fixes bugs automatically

2 months 3 weeks ago
On Thursday, Facebook announced a new developer tool that frees programmers from one of their most nerve-racking tasks - or at least drastically cuts the time they're forced to waste on it. The social network unveiled an artificial intelligence called SapFix that can automatically detect and fix bugs hiding in programs.

The free Delphi Community Edition has been released!

2 months 3 weeks ago
[Ker-Soft] Embarcadero is offering a new C++ Builder and Delphi edition for new developers, startups and hobbyists. The product has the same functionality as the Professional editions of these development environments, but it can be downloaded free of charge from the company's website. It can be used by developers whose annual net revenue does not exceed $5,000 (roughly 1,450,000 forints).

WinForms support has arrived in .NET Core - and it's lightning fast

2 months 3 weeks ago
Microsoft recently released a preview of the Windows Forms framework ported to run on top of .NET Core. The new implementation will not only allow WinForms-based desktop applications to work beyond Windows, but it also appears to bring a significant performance improvement on the original platform.

Most IT students want to become game developers

2 months 4 weeks ago
[eNET] What do university students in IT programs imagine for their careers, and what would they like to work on? eNET's May 2018 survey looked for answers to these and related questions. Among other things, it found that three-quarters of the IT students surveyed do not work, which partly contradicts the industry belief that IT students these days get "snapped up" while still in the classroom.

Most developers earn between 937 thousand and 2 million forints a month

3 months ago
Stack Overflow recently announced that it has updated its salary calculator, which lets developers and employers alike find out what IT professionals typically earn in various roles and regions. According to the data collected by the company, most developers earn the equivalent of 937 thousand to 2 million forints per month - although the exact amount depends heavily on the specific role, experience and geographic location.

Companies find hiring software developers even harder than raising money

3 months ago
As the world as a whole shifts increasingly to digital processes, the role of IT professionals - and developers in particular - keeps growing in importance. So much so that companies now consider hiring the right developers a bigger problem than access to capital, and rate the potential loss of talented programmers as one of the greatest threats to their continued success.

Android developers can now test their apps with their users too

3 months ago
In the middle of the week, Google announced a very useful new feature for developers who sell or distribute their apps through the company's Google Play store. It helps them avoid situations where bugs left in a new version of their app only come to light once the update has already reached almost all of their users.

A historic reshuffle in the programming language popularity ranking

3 months ago
TIOBE, which maintains one of the oldest and best-known popularity rankings of programming languages, has updated its list again, and this time it brought a very interesting development. For the first time in the list's nearly two-decade history, C++ has dropped off the podium entirely, pushed out by Python, which was only fifth last year but has been climbing steadily ever since.

A big change is coming to Apple's app store for developers

3 months ago
Apple has once again warned developers selling programs in its app store that a serious new restriction will soon take effect in the online marketplace. Under the new rule, only apps that come with a proper privacy policy will be allowed to be published or updated in the App Store.

15 Hungarian girls made it to Microsoft's summer developer camp

3 months 1 week ago
[Microsoft] The domestic labor shortage that hits the IT sector especially hard can hardly be eased without training and employing girls and women, says Microsoft, which has for the first time organized an international summer camp for teenage girls interested in IT. 15 Hungarian students were also invited to Athens.

curl now gets official Windows builds

3 months 1 week ago
curl, one of the most popular internet utilities, can now be obtained for Windows dead easily and from an official source. Its creators will from now on regularly publish Windows builds on their website, alongside the existing Linux and Unix ones.

The latest Go release can now compile to WebAssembly too

3 months 2 weeks ago
Google has released a new version of its Go language, half a year after the previous release. One of the most important new features of Go 1.11 is a compiler that, in addition to native binaries, can also generate WebAssembly output, allowing programs written in Go to run even in web browsers.

CSS Optimization Tools for Boosting PWA Performance

3 months 2 weeks ago

When styling websites or PWAs with CSS, you should analyze how CSS resources will affect performance. In this tutorial, we’ll use various tools and related techniques to help build a better PWA by focusing on CSS performance. Specifically, we’ll remove the unused CSS, inline the critical path CSS, and minify the resulting code.

The techniques can also be used to improve the performance of general websites and apps. We’ll be focusing on CSS performance for PWAs since they should be fast and feel native on user devices.

Progressive web apps (PWAs) are web experiences that bring the best of both worlds: native mobile apps (installable from a store) and web apps (reachable from public URLs). Users can start using the application right away from their web browser without waiting for a download, installing anything, or needing extra space on their device.

Service workers and caching allow the app to work offline and when network connectivity is poor. Over time, the app could become faster as more assets are cached locally. PWAs can also be installed as an icon on the home screen and launched full-screen with an initial splash screen.

The Demo PWA to Audit

Before learning how to audit a PWA for any CSS issues, you can get the code of a simple website with PWA features from this GitHub repository. The PWA uses an unminified version of Bootstrap v4 for CSS styling and displays a set of posts fetched from a statically generated JSON API. You can also use the hosted version of this demo, since learning how to build a PWA is beyond the scope of this tutorial.

PWAs are simply web apps with additional features, including these elements:

  • A manifest file. A JSON file that provides the browser with information about the web application, such as its name, description, icons, start URL, display options, etc.
  • A service worker. A JavaScript file that caches the application shell (the minimum required HTML, CSS, and JavaScript for displaying the user interface) and proxies all network requests.
  • HTTPS. PWAs must be served from a secure origin.

Here’s a screen shot of the application shell:

A screen shot of the application with data:

Auditing with Google’s Lighthouse

Lighthouse is an open-source auditing tool developed by Google. It can be used to improve the performance, accessibility and SEO of websites and progressive web apps.

Lighthouse can be accessed from the Audit tab in Chrome DevTools, programmatically as a Node.js module, and also as a CLI tool. It takes a URL and runs a series of audits to generate a report with optimization suggestions.

You can apply different techniques either manually or using tools. This article describes how such tools can be used to remove redundant styles, extract the above-the-fold critical CSS, load the remaining CSS with JavaScript, and minify the resulting code.

Launch Chrome, visit the PWA address https://www.techiediaries.com/unoptimizedpwa/ and open Developer Tools (CTRL-Shift-I). From the Developer Tools, click the Audits panel:

Next, click on Perform an audit…. A dialog will prompt you for the types of audit you want to perform. Keep all types selected and click the Run audit button.

Wait for Lighthouse to complete the auditing process and generate a report:

The scores are calculated in a simulated environment. You’re unlikely to get the same results on your machine because they depend on hardware and network capabilities.

From the report, you can see a timeline which visually shows how the page is loaded. First Meaningful Paint, First Interactive and Consistently Interactive are key time points that describe how fast the page loaded. Our goal is to optimize these metrics according to the Critical Rendering Path.

The post CSS Optimization Tools for Boosting PWA Performance appeared first on SitePoint.

Ahmed Bouchefra

How to Install PHP on Windows

4 months ago

We've previously shown you how to get a working local installation of Apache on your Windows PC. In this article, we'll show how to install PHP 5 as an Apache 2.2 module.

Why PHP?

PHP remains the most widespread and popular server-side programming language on the web. It is installed by most web hosts, has a simple learning curve, close ties with the MySQL database, and an excellent collection of libraries to cut your development time. PHP may not be perfect, but it should certainly be considered for your next web application. Both Yahoo and Facebook use it with great success.

Why Install PHP Locally?

Installing PHP on your development PC allows you to safely create and test a web application without affecting the data or systems on your live website. This article describes PHP installation as a module within the Windows version of Apache 2.2. Mac and Linux users will probably have it installed already.

All-in-One packages

There are some excellent all-in-one Windows distributions that contain Apache, PHP, MySQL and other applications in a single installation file, e.g. XAMPP (including a Mac version), WampServer and Web.Developer. There is nothing wrong with using these packages, although manually installing Apache and PHP will help you learn more about the system and its configuration options.

The PHP Installer

Although an installer is available from php.net, I would recommend the manual installation if you already have a web server configured and running.
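
If you go the manual route, a common sanity check once Apache and PHP are wired together is a one-line test script. This is just a sketch: the file name and the DocumentRoot path in the comment are placeholders to adapt to your own setup, not something prescribed by the article.

<?php
// test.php --- drop this into Apache's DocumentRoot (e.g. C:\Apache2\htdocs)
// and open http://localhost/test.php in a browser. A long configuration page
// means Apache is handing .php files to the PHP module correctly.
phpinfo();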

The post How to Install PHP on Windows appeared first on SitePoint.

Craig Buckler

HTTP/2: Background, Performance Benefits and Implementations

5 months ago

On top of the infrastructure of the internet --- the physical network layers --- sits the TCP/IP suite: the Internet Protocol (IP) and, above it, the transport layer (TCP). It's the fabric underlying all or most of our internet communications.

A higher-level protocol layer that we use on top of this is the application layer. On this level, various applications use different protocols to connect and transfer information. We have SMTP, POP3, and IMAP for sending and receiving emails, IRC and XMPP for chatting, SSH for remote server access, and so on.

The best-known protocol among these, which has become synonymous with the use of the internet, is HTTP (hypertext transfer protocol). This is what we use to access websites every day. It was devised by Tim Berners-Lee at CERN as early as 1989. The specification for version 1.0 was released in 1996 (RFC 1945), and 1.1 in 1999.

The HTTP specification is maintained by the World Wide Web Consortium, and can be found at http://www.w3.org/standards/techs/HTTP.

The first generation of this protocol --- versions 1 and 1.1 --- dominated the web up until 2015, when HTTP/2 was released and the industry --- web servers and browser vendors --- started adopting it.

HTTP/1

HTTP is a stateless protocol, based on a request-response structure, which means that the client makes requests to the server, and these requests are atomic: any single request isn't aware of the previous requests. (This is why we use cookies --- to bridge the gap between multiple requests in one user session, for example, to be able to serve an authenticated version of the website to logged in users.)

Transfers are typically initiated by the client --- meaning the user's browser --- and the servers usually just respond to these requests.

We could say that the current state of HTTP is pretty "dumb", or better, low-level, with lots of "help" that needs to be given to the browsers and to the servers on how to communicate efficiently. Changes in this arena are not that simple to introduce, with so many existing websites whose functioning depends on backward compatibility with any introduced changes. Anything being done to improve the protocol has to be done in a seamless way that won't disrupt the internet.

In many ways, the current model has become a bottleneck with this strict request-response, atomic, synchronous model, and progress has mostly taken the form of hacks, spearheaded often by the industry leaders like Google, Facebook etc. The usual scenario, which is being improved on in various ways, is for the visitor to request a web page, and when their browser receives it from the server, it parses the HTML and finds other resources necessary to render the page, like CSS, images, and JavaScript. As it encounters these resource links, it stops loading everything else, and requests specified resources from the server. It doesn't move a millimeter until it receives this resource. Then it requests another, and so on.

The number of requests needed to load the world's biggest websites is often in the hundreds.

This includes a lot of waiting, and a lot of round trips during which our visitor sees only a white screen or a half-rendered website. These are wasted seconds. A lot of available bandwidth is just sitting there unused during these request cycles.

CDNs can alleviate a lot of these problems, but even they are nothing but hacks.

As Daniel Stenberg (one of the people working on HTTP/2 standardization) from Mozilla has pointed out, the first version of the protocol is having a hard time fully leveraging the capacity of the underlying transport layer, TCP.
Anyone who has worked on optimizing website loading speeds knows this often requires some creativity, to put it mildly.

Over time, internet bandwidth speeds have drastically increased, but HTTP/1.1-era infrastructure didn't utilize this fully. It still struggled with issues like HTTP pipelining --- pushing more resources over the same TCP connection. Client-side support in browsers dragged the most, with Firefox and Chrome disabling it by default, or dropping it entirely, as in IE and Firefox 54+.
This means that even small resources require opening a new TCP connection, with all the bloat that goes with it --- TCP handshakes, DNS lookups, latency… And due to head-of-line blocking, the loading of one resource results in blocking all other resources from loading.

A synchronous, non-pipelined connection vs a pipelined one, showing possible savings in load time.

Some of the optimization sorcery web developers have to resort to under the HTTP/1 model includes image sprites, CSS and JavaScript concatenation, sharding (distributing visitors' requests for resources over more than one domain or subdomain), and so on.

An improvement was overdue, and it had to solve these issues in a seamless, backward-compatible way so as not to interrupt the workings of the existing web.

SPDY

In 2009, Google announced a project that would become a draft proposal for a new-generation protocol, SPDY (pronounced speedy), adding support to Chrome and pushing it to all of its web services in subsequent years. Twitter and server vendors such as Apache and nginx followed with their support, then Node.js, and later came Facebook, WordPress.com, and most CDN providers.

SPDY introduced multiplexing --- sending multiple resources in parallel, over a single TCP connection. Connections are encrypted by default, and data is compressed. The first, preliminary tests in the SPDY white paper, performed on the top 25 sites, showed speed improvements from 27% to over 60%.

After it proved itself in production, SPDY version 3 became the basis for the first draft of HTTP/2, made by the Hypertext Transfer Protocol working group httpbis in 2015.

HTTP/2 aims to address the latency issues that plagued the first version of the protocol, chiefly by multiplexing requests and responses over a single connection, compressing headers, and introducing server push.

It also aims to solve head-of-line blocking. The data it transfers is in binary format, improving its efficiency, and it requires encryption by default (or at least, this is a requirement imposed by major browsers).

Header compression is performed with the HPACK algorithm, solving the vulnerability in SPDY, and reducing web request sizes by half.

Server push is one of the features that aims to solve wasted waiting time, by serving resources to the visitor's browser before the browser requires it. This reduces the round trip time, which is a big bottleneck in website optimization.
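
As a rough illustration of how an application can opt into this from the backend, some HTTP/2-capable servers and proxies translate a preload Link header into a server push. This is a hedged sketch in PHP rather than anything the article prescribes: the asset paths are invented, and whether a push actually happens depends entirely on the server sitting in front of the application.

<?php
// Hint to an HTTP/2-capable front end (e.g. Apache with mod_http2, or a CDN)
// that these assets will be needed, so it may push them along with the HTML.
// The second argument (false) appends the header instead of replacing it.
header('Link: </css/styles.css>; rel=preload; as=style', false);
header('Link: </js/app.js>; rel=preload; as=script', false);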

Due to all these improvements, the difference in loading time that HTTP/2 brings to the table can be seen on this example page by imagekit.io.

Savings in loading time become more apparent the more resources a website has.

The post HTTP/2: Background, Performance Benefits and Implementations appeared first on SitePoint.

Tonino Jankov

Improving Performance Perception: On-demand Image Resizing

5 months 1 week ago

Over a series of articles, we've been building a sample application --- a multi-image gallery blog --- for performance benchmarking and optimizations. At this point, our application serves the same image regardless of the resolution and screen size it's being served in. In this tutorial, we'll modify it to serve a resized version depending on display size.

Objective

There are two stages to this improvement.

  1. We need to make all images responsive wherever this might be useful. One place is the thumbnails on the home page and in the gallery pages, and another is the full-size image when an individual image is clicked in the gallery.
  2. We need to add resizing logic to our app. The point is to generate a resized image on the fly as it's demanded (see the sketch after this list). This will keep non-popular images from polluting our hard drive, and it'll make sure the popular ones are, on subsequent requests, served in optimal sizes.
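
To make that second step concrete, here is a minimal, framework-agnostic sketch of on-demand resizing using PHP's GD extension. It is not the implementation this series actually uses (the app has its own controllers and storage layer); the resizeImage() helper name, the paths, and the JPEG-only assumption are all made up for illustration.

<?php
// Minimal on-demand resize: if a cached copy for the requested width exists,
// reuse it; otherwise scale the original with GD, save it, and serve it from then on.
function resizeImage(string $originalPath, int $width, string $cacheDir): string
{
    $cachedPath = $cacheDir . '/' . $width . '-' . basename($originalPath);

    if (!file_exists($cachedPath)) {
        $source  = imagecreatefromjpeg($originalPath); // assumes JPEG input
        $resized = imagescale($source, $width);        // height derived from aspect ratio
        imagejpeg($resized, $cachedPath, 85);          // 85 = quality/size trade-off
        imagedestroy($source);
        imagedestroy($resized);
    }

    return $cachedPath;
}
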
Responsive Images?

As this post explains, images in the modern web are incredibly complex. Instead of just <img src="mypic.jpg"> from the olden days, we now have something crazy like this:

<picture>
    <source media="(max-width: 700px)" sizes="(max-width: 500px) 50vw, 10vw"
            srcset="stick-figure-narrow.png 138w, stick-figure-hd-narrow.png 138w">
    <source media="(max-width: 1400px)" sizes="(max-width: 1000px) 100vw, 50vw"
            srcset="stick-figure.png 416w, stick-figure-hd.png 416w">
    <img src="stick-original.png" alt="Human">
</picture>

A combination of srcset, picture and sizes is necessary in a scenario where you're worried that, if you use the same image for a smaller screen size, the primary subject of the image may become too small. You want to display a different image (more focused on the primary subject) at a different screen size, but still want to display separate assets of the same image based on device-pixel ratio, and to customize the height and width of the image based on viewport.

Since our images are photos and we always want them to be in their default DOM-specified position filling up the maximum of their parent container, we have no need for picture (which lets us define an alternative source for a different resolution or browser support --- like trying to render SVG, then PNG if SVG is unsupported) or sizes (which lets us define which viewport portion an image should occupy). We can get away with just using srcset, which loads a different size version of the same image depending on the screen size.

Adding srcset

The first location where we encounter images is in home-galleries-lazy-load.html.twig, the partial template that renders the home screen's galleries list.

<a class="gallery__link" href="{{ url('gallery.single-gallery', {id: gallery.id}) }}"> <img src="{{ gallery.images.first|getImageUrl }}" alt="{{ gallery.name }}" class="gallery__leading-image card-img-top"> </a>

We can see here that the image's link is fetched from a Twig filter, which can be found in the src/Twig/ImageRendererExtension.php file. It takes the image's ID and the route's name (defined in the annotation in ImageController's serveImageAction route) and generates a URL based on that formula: /image/{id}/raw -> replacing {id} with the ID given:

public function getImageUrl(Image $image)
{
    return $this->router->generate('image.serve', [
        'id' => $image->getId(),
    ], RouterInterface::ABSOLUTE_URL);
}

Let's change that to the following:

public function getImageUrl(Image $image, $size = null)
{
    return $this->router->generate('image.serve', [
        'id' => $image->getId() . (($size) ? '--' . $size : ''),
    ], RouterInterface::ABSOLUTE_URL);
}

Now, all our image URLs will have --x as a suffix, where x is their size. This is the change we'll apply to our img tag as well, in the form of srcset. Let's change it to:

<a class="gallery__link" href="{{ url('gallery.single-gallery', {id: gallery.id}) }}"> <img src="{{ gallery.images.first|getImageUrl }}" alt="{{ gallery.name }}" srcset=" {{ gallery.images.first|getImageUrl('1120') }} 1120w, {{ gallery.images.first|getImageUrl('720') }} 720w, {{ gallery.images.first|getImageUrl('400') }} 400w" class="gallery__leading-image card-img-top"> </a>

If we refresh the home page now, we'll notice the srcset's new sizes listed:

This isn't going to help us much, though. If our viewport is wide, this will request full-size images, despite them being thumbnails. So instead of srcset, it's better to use a fixed small thumbnail size here:

<a class="gallery__link" href="{{ url('gallery.single-gallery', {id: gallery.id}) }}"> <img src="{{ gallery.images.first|getImageUrl('250') }}" alt="{{ gallery.name }}" class="gallery__leading-image card-img-top"> </a>

We now have thumbnails on demand: they're generated the first time they're requested, then cached and served directly on subsequent requests.

Let's hunt down other srcset locations now.

In templates/gallery/single-gallery.html.twig, we apply the same fix as before. We're dealing with thumbnails, so let's just shrink the file by adding the size parameter into our getImageUrl filter:

<img src="{{ image|getImageUrl(250) }}" alt="{{ image.originalFilename }}" class="single-gallery__item-image card-img-top">

And now for the srcset implementation, finally!

The individual image views are rendered with a JavaScript modal window at the bottom of the same single-gallery view:

{% block javascripts %}
    {{ parent() }}

    <script>
        $(function () {
            $('.single-gallery__item-image').on('click', function () {
                var src = $(this).attr('src');
                var $modal = $('.single-gallery__modal');
                var $modalBody = $modal.find('.modal-body');

                $modalBody.html('');
                $modalBody.append($('<img src="' + src + '" class="single-gallery__modal-image">'));
                $modal.modal({});
            });
        })
    </script>
{% endblock %}

There's an append call which adds the img element into the modal's body, so that's where our srcset attribute must go. But since our image URLs are dynamically generated, we can't really call the Twig filter from within the script. One alternative is to add the srcset into the thumbnails and then use it in the JS by copying it from the thumb elements, but this would not only make the full-sized images load in the background of the thumbnails (because our viewport is wide), but it would also call the filter 4 times for each thumbnail, slowing things down. Instead, let's create a new Twig filter in src/Twig/ImageRendererExtension.php which will generate the full srcset attribute for each image.

public function getImageSrcset(Image $image)
{
    $id = $image->getId();
    $sizes = [1120, 720, 400];
    $string = '';

    foreach ($sizes as $size) {
        $string .= $this->router->generate('image.serve', [
            'id' => $id . '--' . $size,
        ], RouterInterface::ABSOLUTE_URL) . ' ' . $size . 'w, ';
    }

    $string = trim($string, ', ');

    return html_entity_decode($string);
}

We mustn't forget to register this filter:

public function getFilters()
{
    return [
        new Twig_SimpleFilter('getImageUrl', [$this, 'getImageUrl']),
        new Twig_SimpleFilter('getImageSrcset', [$this, 'getImageSrcset']),
    ];
}

We have to add these values into a custom attribute, which we'll call data-srcset on each individual thumbnail:

<img src="{{ image|getImageUrl(250) }}" alt="{{ image.originalFilename }}" data-srcset=" {{ image|getImageSrcset }}" class="single-gallery__item-image card-img-top">

Now each individual thumbnail has a data-srcset attribute with the required srcset values, but this doesn't trigger any loading, because the values sit in a custom data attribute to be used later.

The final step is updating the JS to take advantage of this:

{% block javascripts %}
    {{ parent() }}

    <script>
        $(function () {
            $('.single-gallery__item-image').on('click', function () {
                var src = $(this).attr('src');
                var srcset = $(this).attr('data-srcset');
                var $modal = $('.single-gallery__modal');
                var $modalBody = $modal.find('.modal-body');

                $modalBody.html('');
                // Copy the thumbnail's data-srcset into the modal image's srcset
                $modalBody.append($('<img src="' + src + '" srcset="' + srcset + '" class="single-gallery__modal-image">'));
                $modal.modal({});
            });
        })
    </script>
{% endblock %}

The post Improving Performance Perception: On-demand Image Resizing appeared first on SitePoint.

Bruno Skvorc

How to Boost Your Server Performance with Varnish

5 months 1 week ago

Varnish Cache is an HTTP accelerator and reverse proxy developed by Danish consultant and FreeBSD core developer Poul-Henning Kamp, along with other developers at Norwegian Linpro AS. It was released in 2006.

According to Pingdom.com, a company focused on web performance, in 2012 Varnish was already famous among the world's top websites for its capacity to speed up web delivery, and it was being used by sites such as Wired, SlideShare, Zappos, SoundCloud, Weather.com, Business Insider, Answers.com, Urban Dictionary, MacRumors, DynDNS, OpenDNS, Lonely Planet, Technorati, ThinkGeek and Economist.com.

It is licensed under a two-clause BSD license. Varnish has a premium tier, Varnish Plus, focused on enterprise customers, which offers some extra features, modules, and support.

Although there are other solutions that also shine, Varnish is still a go-to solution that can dramatically improve website speed, reduce the strain on the web application server's CPU, and even serve as a protection layer from DDoS attacks. KeyCDN recommends deploying it on the origin server stack.

Varnish can sit on a dedicated machine in case of more demanding websites, and make sure that the origin servers aren't affected by the flood of requests.

At the time of this writing (November 2017), Varnish is at version 5.2.

How it Works

Caching in general works by keeping the pre-computed outputs of an application in memory, or on the disk, so that expensive computations don't have to be repeated on every request. A web cache can be on the client (browser cache), or on the server. Varnish falls into the second category. It is usually configured so that it listens for requests on the standard HTTP port (80), and then serves the requested resource to the website visitor.

The first time a certain URL and path are requested, Varnish has to request it from the origin server in order to serve it to the visitor. This is called a CACHE MISS, which can be read in HTTP response headers, depending on the Varnish setup.

According to the docs,

when an object, any kind of content i.e. an image or a page, is not stored in the cache, then we have what is commonly known as a cache miss, in which case Varnish will go and fetch the content from the web server, store it and deliver a copy to the user and retain it in cache to serve in response to future requests.

When a particular URL or a resource is cached by Varnish and stored in memory, it can be served directly from server RAM; it doesn't need to be computed every time. Varnish will start delivering a CACHE HIT in a matter of microseconds.

This means that neither our origin server nor our web application, including its database, is touched by future requests. They won't even be aware of the requests for cached URLs.

The origin server --- or servers, in case we use Varnish as a load balancer --- are configured to listen on some non-standard port, like 8888, and Varnish is made aware of their address and port.
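
One practical consequence for the application side is that, by default, Varnish derives its cache lifetime from standard caching headers, so the backend can hint how long a response may be reused. Here is a minimal PHP sketch of that; the TTL values are arbitrary examples, not recommendations from the article.

<?php
// Tell any HTTP cache in front of us (Varnish included) that this response
// may be reused for 5 minutes; s-maxage targets shared caches specifically.
header('Cache-Control: public, max-age=300, s-maxage=300');

// For personalized pages, opt out of shared caching instead:
// header('Cache-Control: private, no-store');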

Varnish Features

Varnish is threaded. It's been reported that Varnish was able to handle over 200,000 requests per second on a single instance. If properly configured, the only bottlenecks of your web app will be network throughput and the amount of RAM. (This shouldn't be an unreasonable requirement, because it just needs to keep computed web pages in memory, so for most websites, a couple of gigabytes should be sufficient.)

Varnish is extendable via VMODS. These are modules that can use standard C libraries and extend Varnish functionality. There are community-contributed VMODS listed here. They range from header manipulation to Lua scripting, throttling of requests, authentication, and so on.

Varnish has its own domain-specific language, VCL. VCL provides comprehensive configurability. With a full-page caching server like Varnish, there are a lot of intricacies that need to be solved.

When we cache a dynamic website with dozens or hundreds of pages and paths, with GET query parameters, we'll want to exclude some of them from cache, or set different cache-expiration rules. Sometimes we'll want to cache certain Ajax requests, or exclude them from the cache. This varies from project to project, and can't be tailored in advance.

Sometimes we'll want Varnish to decide what to do with the request depending on request headers. Sometimes we'll want to pass requests directly to the back end with a certain cookie set.

To quote the Varnish book,

VCL provides subroutines that allow you to affect the handling of any single request almost anywhere in the execution chain.

Purging the cache often needs to be done dynamically --- triggered by publishing articles or updating the website. Purging also needs to be done as atomically as possible --- meaning it should target the smallest possible scope, like a single resource or path.

This means that specific rules need to be defined, with their order of priority in mind. Some examples can be found in the Varnish book (which is available to read online or as a downloadable PDF).
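
As an illustration of the "purge on publish" idea from the application's side, a PHP backend can send an HTTP PURGE request to Varnish for the path that just changed. This is a hedged sketch: it assumes a VCL configuration that actually accepts PURGE from the application server (Varnish ignores it out of the box), and the host, port, helper name and response code check are assumptions to adapt.

<?php
// Ask Varnish (assumed to be listening on 127.0.0.1:80 in front of this app)
// to drop its cached copy of a single path after the content has been updated.
function purgePath(string $path): bool
{
    $ch = curl_init('http://127.0.0.1:80' . $path);
    curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PURGE');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    return $status === 200; // a purge-enabled VCL typically answers 200 on success
}

purgePath('/blog/my-updated-article');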

Varnish has a set of tools for monitoring and administering the server:

  • There's varnishtop, which lets us monitor requested URLs and their frequency.

  • varnishncsa can be used to print the Varnish Shared memory Log (VSL): it dumps everything pointing to a certain domain and subdomains.

  • varnishhist reads the VSL and presents a live histogram showing the distribution of the most recent requests, giving an overview of server and back-end performance.

  • varnishtest is used to test VCL configuration files and develop VMODS.

  • varnishstat displays statistics about our varnishd instance:

  • varnishlog is used to get data about specific clients and requests.

Varnish Software offers a set of commercial, paid solutions either built on top of Varnish cache, or extending its usage and helping with monitoring and management: Varnish Api Engine, Varnish Extend, Akamai Connector for Varnish, Varnish Administration Console (VAC), and Varnish Custom Statistics (VCS).

The post How to Boost Your Server Performance with Varnish appeared first on SitePoint.

Tonino Jankov