Foswiki-2.1.9 is now available for download! We are delighted to announce the new release, which includes 57 significant bug fixes compared to the previous 2.1.8 version. This update addresses a range of important issues and improves overall stability and performance.
See the release notes for additional information.

This release focuses on performance optimization, building on Foswiki's historical struggles with disk I/O bottlenecks. Traditionally, the engine accessed many files on each request to compile the objects needed to render a wiki page. This was particularly true for preference settings and access control rights, which often required loading complex combinations of objects. To address this, extensive profiling was conducted to identify where time could be shaved off. Our analysis shows a window of between 300 ms and 1 second per request in which further optimizations can be made, allowing us to fine-tune the engine's efficiency.
See Item15206 for more details.

In large wikis, renaming or deleting webs took forever, with the web server's connection to the Foswiki backend timing out. This was caused by the system iterating over the entire wiki for every topic contained in the web being renamed, in order to rewrite links. In most cases this kind of link rewriting isn't required, or even desirable.
See Item15333 and Item15220 for more details.

A significant contributor to slow page loading times is the sheer volume of assets being loaded. Each wiki page may require a unique set of CSS and JavaScript files, which can be substantial in number. There is, however, a common core of functionality that must be loaded on every page, while additional assets are required only in specific cases. Foswiki now allows this core set of assets to be predefined and compiled into a single combined chunk. This reduces the number of parallel HTTP requests needed to load a page, resulting in improved performance.

See Item15229 for more details.
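Conceptually, the combined chunk works like this minimal sketch (the file names here are hypothetical, not Foswiki's actual asset list):

```bash
# Concatenate the core assets into a single file so the browser makes one request
cat jquery.js foswiki.js behaviour.js > core-combined.js
gzip -k core-combined.js   # optionally keep a pre-zipped copy for gzip-capable clients
```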
Tags: features, performance
We are very pleased to announce the availability of Foswiki 2.1.8. This release contains 61 fixes relative to 2.1.7, including 9 critical security-related fixes.
Tags: security
No, it was much simpler than that, but the nature of Docker images means it's a little different.
In order for a Docker instance to host something like Foswiki, it needs persistent volumes: a place to hold the changing content. In timlegge/docker-foswiki, the provided Docker Compose files essentially put the entire Foswiki website on a persistent volume. The way Foswiki automatically intermingles some of the data and code (or, more properly, automatically generates content sections under data) makes it difficult to do anything else.
So, simply pulling the latest Docker image for timlegge/docker-foswiki means you get all the updated Alpine Linux packages, but Foswiki remains at the same version.
To upgrade Foswiki itself, run the upgrade from inside the container:

```bash
# Open a shell inside the running container
docker exec -it docker-foswiki /bin/bash
cd /var/www/foswiki
wget https://github.com/foswiki/distro/releases/download/FoswikiRelease02x01x07/Foswiki-upgrade-2.1.7.tgz
tar --strip-components=1 -zxf Foswiki-upgrade-2.1.7.tgz
cd tools
./configure --save
rm ../Foswiki-upgrade-2.1.7.tgz   # the tarball was downloaded one level up
exit
```
Then you simply need to restart your Docker container so that the updated Perl code is cached.
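For example, using the container name from the commands above:

```bash
docker restart docker-foswiki
```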
Alternatively, you can download the upgrade on the host and copy it into the container's volume first:

```bash
# On the Docker host
wget https://github.com/foswiki/distro/releases/download/FoswikiRelease02x01x07/Foswiki-upgrade-2.1.7.tgz
cp Foswiki-upgrade-2.1.7.tgz /var/lib/docker/volumes/foswiki_foswiki_www/_data

# Then unpack it inside the container
docker exec -it docker-foswiki /bin/bash
cd /var/www/foswiki
tar --strip-components=1 -zxf Foswiki-upgrade-2.1.7.tgz
cd tools
./configure --save
rm ../Foswiki-upgrade-2.1.7.tgz   # remove the tarball from the web root
exit
```
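If your installation uses a different volume name, the mount point under /var/lib/docker/volumes can be confirmed with docker volume inspect (the volume name here matches the compose file above):

```bash
docker volume inspect foswiki_foswiki_www --format '{{ .Mountpoint }}'
```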
Tags: docker, installation, migration, update
The scenario runs on Katacoda and needs a GitHub userid and password to log in. It builds five different environments and measures their performance. The environments are:
| Environment | Description |
|---|---|
| CGI | The out-of-the-box Foswiki install with the standard CGI configuration file. This illustrates the baseline performance. |
| CGI-gz | The CGI environment, with support for delivery of the pre-zipped js and css files. This reduces the volume of data to be transported to render the page. |
| PageOpt | The CGI environment with the PageOptimizerPlugin configured. This reduces the number of files requested to render the page: one js and one css file. |
| CGI-deflate | The CGI environment with Apache mod_deflate enabled. This reduces the volume of data to be transported to render the page. |
| FCGI | The Foswiki install with the FastCGI configuration file and Apache mod_fcgid installed. |
In the first instance I measured page download times using a simple technique:
```bash
/usr/bin/time --quiet -f "%e" wget -pq --header="accept-encoding: gzip" --no-check-certificate --delete-after https://localhost/foswiki
```

This command downloads all the components of the Main.WebHome page and prints the time the command took to complete. My first results were:
| Environment | localhost: first (ms) | localhost: second (ms) | localhost: third (ms) | home server: first (ms) | home server: avg of 4 (ms) | home server: std dev (ms) |
|---|---|---|---|---|---|---|
| CGI | 600 | 570 | 630 | 1100 | 690 | 105 |
| CGI-gz | 640 | 570 | 580 | 700 | 685 | 82 |
| PageOpt | 630 | 640 | 600 | 700 | 700 | 105 |
| CGI-deflate | 700 | 660 | 650 | 870 | 687 | 58 |
| Fast CGI | 660 | 170 | 170 | 690 | 682 | 101 |
The first three samples were taken on localhost. The 1100 ms on my home server is likely caused by a DNS lookup that is cached for the subsequent experiments, and the change in the Fast CGI measurement between the first and second sample is caused by the startup of foswiki.fcgi, which is not repeated in the later retrievals.
But all other timings are around 650 +/- 200 ms with 95% confidence. Judging from the Fast CGI measurements on localhost, the time is mostly taken up in the transport of the various elements: in all configurations the server delivers them to the network faster than the network can deliver them to the client.
To look at this further, I tried several tools that measure web page download speed. Using Dynomapper's blog "Top 15 tools for measuring website or application speed" as a guide, I got some interesting insights.
Pingdom gave me the first confirmation that the different configurations were actually working.
| Environment | Grade | Size (kB) | Time (s) | Requests | Comment |
|---|---|---|---|---|---|
| CGI | C 74 | 242.9 | 3.67 | 36 | The baseline configuration |
| CGI-gz | C 74 | 125.2 | 3.39 | 36 | Reduced download size for zipped .css and .js |
| PageOpt | C 76 | 239.3 | 3.83 | 29 | Reduced requests due to PageOptimizerPlugin |
| CGI-deflate | C 74 | 242.8 | 3.51 | 36 | No noticeable change. The Katacoda nginx proxy unzips the result (Serverfault). Apache debug shows compression is done. The timing diagram shows that the server response for jquery-2.2.4.js (85.6 kB) takes 220 ms vs 82 ms in the CGI environment: compressing on the fly creates overhead |
| Fast CGI | C 74 | 241.6 | 3.76 | 36 | No noticeable change |
The time to retrieve a single element can be modelled with four components:

| Component | Abbreviation |
|---|---|
| Time to send the request from the browser to the server | Tsend |
| Time to retrieve the requested object from disk | Tget |
| Time to process the requested object for transmission | Tprocess |
| Time to return the requested object from the server to the browser | Treturn |
Tsend and Treturn are defined by the network properties; I will assume that they are constant across elements for a single measurement. Tget is a server property and will be relatively constant across elements (ignoring disk transfer time, which will depend on element size). Tprocess will be zero for the static elements.
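In this model, the total retrieval time for a single element is simply the sum of the four components, and for static elements the Tprocess term drops out:

```
Ttotal          = Tsend + Tget + Tprocess + Treturn
Ttotal (static) = Tsend + Tget + Treturn            (Tprocess = 0)
```

So the difference between the html's wait time and a typical static element's wait time approximates the Tprocess for generating the html.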
This is probably a naive model, but it gave me the idea that Tprocess should be zero for static elements (js, css, png, etc.). And Pingdom creates a JSON file with all the measurements for the download: the HTTP Archive file (HAR). I took measurements in three different environments. The results were revealing.
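As an aside, the per-element wait times can be pulled straight out of such a HAR file with jq (the file name here is hypothetical):

```bash
# List each element's server wait time (ms) and URL, slowest first
jq -r '.log.entries[] | "\(.timings.wait)\t\(.request.url)"' pingdom.har | sort -rn
```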
The first graph shows the wait time for each element of the Main.WebHome page. All but one are static elements, and there is one clear outlier: the html, which has a wait time of almost 4 seconds! Tinkerbell is a 20-year-old machine with very low power consumption that has been running in my basement non-stop for all that time. It is not built for high-powered processing, and it illustrates the point that creating html needs CPU and memory. From the graph you can also see that the static elements wait varying times, but that large elements do not necessarily create a longer wait. The median wait time at Tinkerbell is 246 ms, and all elements are retrieved in approximately that time.
This suggests that the time to generate Main.WebHome can be as short as 730 - 323 = 407 ms. The rest is spent starting perl and getting Main/WebHome.txt from disk.
If starting perl is a substantial overhead in retrieving the Main.WebHome page, then the wait time can be reduced further by configuring Apache and Foswiki with FCGI. The results in my Katacoda Apache FCGI environment are shown in the graph on the right. The median processing time for the static elements in this sample is 408 ms, demonstrating how unreliable my arguments here are. But the wait time for Main.WebHome has disappeared into the noise! The actual number is 229 ms.
So what does all this mean for the performance of Foswiki? Use FastCGI: it will reduce the wait time for html generation. In the Katacoda Apache FCGI environment the engine takes 500 ms to start, which is a consistent overhead on each transaction if you use regular CGI.
Enabling mod_deflate is unlikely to make any improvement in the performance of your Foswiki site. The time taken to compress and decompress is added to the time it takes to retrieve the element from disk. It may help on a slow network, but you should check whether it is worth it.
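A quick way to check, assuming curl is available: compare the transferred size and total time for the same page with and without compression (-k skips certificate checks, as with --no-check-certificate above):

```bash
# Compressed transfer (mod_deflate compresses on the fly if enabled)
curl -sk -o /dev/null -H "Accept-Encoding: gzip" \
     -w "gzip:  %{size_download} bytes in %{time_total}s\n" https://localhost/foswiki
# Uncompressed transfer
curl -sk -o /dev/null \
     -w "plain: %{size_download} bytes in %{time_total}s\n" https://localhost/foswiki
```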
Finally, consider the PageOptimizerPlugin. This plugin collects all static css and js into one css and one js file and caches the result. From the server-side perspective there are now two files to serve instead of, say, ten. But since the retrieval is parallel, you are still stuck with whatever wait time is associated with those two files: at least 200 ms. Sites like Pingdom and many others recommend reducing the number of static elements to retrieve per page, because the browser needs the css and js before it can render the page. So combining the elements may improve the user experience.
Tags: development, installation, performance