Should I be using WebP images?

24 November, 2020

You may never have encountered the .webp file extension before, but you have probably encountered a website that took too long to load, or that loaded its text first and its images a few seconds later. This is because images are one of the 'largest' parts of a website in terms of file size, and take longer to download to your computer for viewing. A few years ago, Google purchased a company (On2 Technologies) whose technology allowed them to develop a more compact file type that supports both lossless (like PNG) and lossy (like JPEG) compression: .webp

WebP images are smaller than both PNG and JPEG files, with little to no perceptible difference in visual quality.

[Side-by-side comparison images: 'I am a WebP Image' / 'I am a JPEG']

One of these images is an optimized .jpg, and one is a .webp. Can you tell which is which?

In the above example, the .jpg file, which was optimized with Adobe Photoshop as a JPEG at 30% quality, is 85.4 kilobytes, while the .webp file, which was optimized with Google's libwebp at a quality of 50%, is 63.3 kilobytes: a savings of roughly 26%. The WebP image is on the left, in case you were still wondering.
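
The example above was produced with Google's libwebp, but if PHP is already part of your toolchain, the GD extension can write .webp files directly. A minimal sketch, assuming GD was compiled with WebP support; the file names are placeholders:

<?php
// Convert a JPEG to WebP with PHP's GD extension.
$image = imagecreatefromjpeg('truck.jpg');  // load the source JPEG
imagewebp($image, 'truck.webp', 50);        // write a WebP at 50% quality
imagedestroy($image);                       // free the image resource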

WebP has been out for a few years now, and as of this writing it is supported in all browsers except Internet Explorer. While it may not be supported on older versions (and you should avoid using only .webp if legacy support is your goal - a simple fallback pattern is sketched after the list below), it will work on all recent versions of Firefox, Chrome, Safari, Opera, Edge, and their mobile counterparts. You may have encountered .webp images in your everyday browsing without knowing it:

  • Facebook started using .webp in 2014 and reported a 25-35% size savings for JPEG images, and 80% for PNG
  • Google has been using them since inception, and has a study reporting .webp is "25-34% smaller compared to JPEG file size at equivalent SSIM index"
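
On that legacy-support note: the standard HTML <picture> element lets you serve .webp with a fallback - browsers that understand image/webp use the first source, and older browsers (like Internet Explorer) fall back to the plain <img>. A minimal sketch with placeholder file names:

<picture>
  <!-- WebP-capable browsers use this source -->
  <source srcset="truck.webp" type="image/webp">
  <!-- everything else falls back to the JPEG -->
  <img src="truck.jpg" alt="A truck">
</picture>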

These size savings result in faster page loads, which result in a better user experience. If your site contains many images, and you are looking for an easy way to reduce the size of your site's page load, exploring .webp may be a good place to start. If your site only contains a few images, and your goal is a faster page load, we would recommend looking at a waterfall chart and identifying the real culprit before converting your images to .webp.

Server Backups using Amazon's S3 Service

08 August, 2016

A good backup strategy is integral to any well-functioning business that maintains digital assets. If you haven't already, at some point you will be investigating off-site storage solutions and come across Amazon's Simple Storage Service, more commonly known as 'S3'. While it may not suit every application, S3 provides a plethora of features out of the box, is amazingly easy to scale, and has a very affordable price point - making it a perfect tool for our general backup purposes.

As with anything, there are multiple ways to go about creating the actual backup script. We're going to use PHP and the AWS SDK for PHP for portability and transparency, and because I don't like to install a package if I don't need to; but if you are interested in doing this with s3cmd - a Linux program for transferring data to S3 - see levels.io's tutorial here: https://levels.io/backup-linode-digital-ocean-vps-amazon-s3/.

Now, let's create a backup script. You will need an Amazon Web Services account for this, so if you don't have one, create one now. Once you have your AWS account, log in to your management console and select 'Services > S3' from the top menu - this will bring up your S3 dashboard. S3 uses 'buckets' to organize and store your files - just like a 'folder', except it's called a 'bucket'. Create a new bucket for our backups now by clicking the 'Create Bucket' button, giving the bucket a name, and selecting a region.
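
(If you prefer to script this step, the same bucket can be created with the AWS SDK for PHP - a minimal sketch, assuming SDK v3 and using the placeholder credentials you will generate below:)

<?php
require 'vendor/autoload.php'; // the AWS SDK for PHP autoloader

use Aws\S3\S3Client;

// Placeholder region and credentials - substitute your own.
$s3 = new S3Client([
    'version'     => 'latest',
    'region'      => '[aws-region]',
    'credentials' => [
        'key'    => '[aws-access-key]',
        'secret' => '[aws-secret-key]',
    ],
]);

$s3->createBucket(['Bucket' => '[aws-bucket-name]']);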

[Image: Creating a New S3 Bucket]

[Image: Select 'Security Credentials' from the Account menu]

Once you have the new bucket created, you will need to create a new 'user' which we will use to access this bucket via the AWS API. Access your AWS user settings by clicking 'Security Credentials' in the Account menu, found by hovering over your name. I am going to show you how to create a user that can access all of your S3 buckets, but you can use customized policies to limit access to only certain functions or buckets (to, say, create a put-only user that cannot view or delete your S3 items, for increased security).

Once on the security credentials page, select 'Users' from the left-hand menu, and click 'Create New Users'. This will bring up a form where you can enter a descriptive name for your user, such as 'backup-user'; click 'Create' (make sure that 'Generate an access key for each user' is checked). You will get a page confirming the creation of your new user, with a link to 'Show User Security Credentials'. Click that link and copy down the Access and Secret Keys you are shown - keep them safe, as they are essentially the login information for that user.

[Image: Creating a New IAM User]

[Image: Copy down your new user's Access and Secret Keys]

New users, by default, have access to nothing, so we'll need to create a group that has access to our S3 bucket and add this user to it. Do this by clicking 'Groups' in the left-hand menu, then 'Create New Group'. Give your group a descriptive name and click next; this page will allow you to select a policy for the group. Enter 'S3' in the search box, and select the 'AmazonS3FullAccess' policy - this will give the group full access to all of your S3 buckets, and access to all of the S3 API functions. The final step is to add the user we previously created to this new group.

[Image: Add the AmazonS3FullAccess Policy to your Group]

[Image: Add your user to the new group you created]
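
(For the put-only user mentioned earlier, you could attach a custom policy along these lines instead of AmazonS3FullAccess - a sketch, with the bucket name as a placeholder; it permits uploads, including multipart uploads, and nothing else:)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:AbortMultipartUpload"],
      "Resource": "arn:aws:s3:::[aws-bucket-name]/*"
    }
  ]
}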

Once this is done - you can log out of Amazon; at this point you should have:

  • AWS S3 Bucket Name
  • AWS S3 Bucket Region
  • AWS User Access Key
  • AWS User Secret Access Key

I have written this handy little script to get you started, but please feel free to customize it to your needs; the required AWS SDK for PHP is included in the package below:

 Download o2_s3_backup.zip

This script will:

  • Dump a single database, or multiple databases, to the root of the backup location
  • Create a tarball of the location specified for backup, inside the directory of the backup script
  • Transfer the tarball to your S3 bucket using AWS Multipart Upload, to support files up to 5TB (see the sketch after this list)
  • Remove the SQL dumps and tarball after the transfer is complete
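
For reference, here is roughly what that multipart transfer looks like with the SDK's MultipartUploader class - a sketch rather than the script's exact code, where $s3 is an Aws\S3\S3Client and the paths are placeholders:

use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

// Upload the tarball in parts; the SDK splits the file and can retry
// individual parts on failure.
$uploader = new MultipartUploader($s3, '/path/to/backup.tar.gz', [
    'bucket' => '[aws-bucket-name]',
    'key'    => 'backup.tar.gz',
]);

try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}\n";
} catch (MultipartUploadException $e) {
    echo "Upload failed: " . $e->getMessage() . "\n";
}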

Upload the included PHP file and AWS SDK to a location outside of your server's webroot, and configure the variables at the top of the script. These lines:

$BACKUPNAME = "[Name-of-your-backup]";
$BACKUPFILELOCATION = "[full/filepath/to/the/folder/to/be/archived]";
$FOLDERSTOEXCLUDE = array('[folders]', '[to]', '[exclude]');
$DBUSER = "[Your-database-username]";
$DBPASS = "[Your-database-password]";
$DBSTOBACKUP = array('[databases]', '[to]', '[dump]');
// AWS access info
if (!defined('awsAccessKey')) define('awsAccessKey', '[aws-access-key]');
if (!defined('awsSecretKey')) define('awsSecretKey', '[aws-secret-key]');
if (!defined('awsBucket')) define('awsBucket', '[aws-bucket-name]');
if (!defined('awsRegion')) define('awsRegion', '[aws-region]');

will configure the location of the folder to be backed up, allow you to export one or more databases, exclude folders from the backup, and specify your AWS S3 bucket and credentials. The method for dumping the database passes the password on the command line; for a more secure version (if your MySQL configuration is set up correctly) you may un-comment line 57. I kept the less secure version in the demo for ease of setup.
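
The more secure approach relies on MySQL's own configuration rather than the command line: with a ~/.my.cnf along these lines (a sketch - use your own credentials, and chmod the file to 600), mysqldump picks up the credentials itself and the password never appears in the process list:

# ~/.my.cnf
[mysqldump]
user=[Your-database-username]
password=[Your-database-password]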

You can extend and configure this script to back up multiple webroots, send to multiple S3 buckets, or remove backups from S3 after a certain time has expired. You can find information on the base code used in the script, and the other functions available to you, in the AWS SDK for PHP documentation: http://docs.aws.amazon.com/aws-sdk-php/v3/api/.

I would also recommend attaching this script to a cron job set to run on a regular basis, as well as adding a simple mail script that alerts you when the backup completes, or fails to upload.
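
For example, a crontab entry like this one (the paths are placeholders) would run the backup nightly at 3 AM and log its output:

# Run the S3 backup every day at 3:00 AM
0 3 * * * php /path/to/o2_s3_backup.php >> /var/log/o2_s3_backup.log 2>&1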

And as easy as that, you have an efficient off-site backup on almost any Linux-based server. Let me know your thoughts, and whether you found this useful, in the comments below...

Preventing XSS Attacks

10 July, 2016

Cross-Site Scripting attacks, abbreviated as XSS, are a variety of attack in which a user tricks a web application into serving malicious JavaScript code that runs when they, or another user, visit a page. This type of attack is very common, and typically occurs when a user has access to a text input on a web application.

XSS attacks can be used for a variety of things; since the injected JavaScript executes in the victim's browser, it has access to the page's content, cookies, and session information, allowing for:

  • The installation of malware or adware in your browser or on your computer
  • Sniffing of your session information, allowing an attacker to impersonate you on that website
  • Performing some action on the website that appears as if you performed it (like changing your password)

These attacks are very powerful; however, they can easily be thwarted. In its most simple form, you can avoid leaving XSS holes in your code by never allowing user-entered data to be rendered as/in HTML without escaping it first. THIS INCLUDES URL PARAMETERS. I have seen quite a few attacks happen because a URL parameter was not escaped before being output to the page. Just because the user isn't supposed to input something doesn't mean they can't.

So - you find yourself staring at some code where the user's profile is output to the page. You test whether you are open to an attack by entering "I really like <script>alert('BIG');</script> trucks" in the text area and hitting save. Boom: when the page reloads to show your bio, "I really like  trucks", the word 'BIG' appears on screen in an alert. Time to close those XSS holes.

If there is no need for actual HTML tags to be present in your users' outputted data, sanitizing with plain PHP before output is the way to go:

echo htmlspecialchars($_GET['foo'], ENT_QUOTES | ENT_XHTML, 'UTF-8');

This converts all of the special characters that would allow the code to run into harmless characters that are simply displayed on the screen. If you are dealing with text added to the page with JavaScript, utilize jQuery's text() function, like so:

var safe = $('<span></span>').text(unsafe).html();

If you need the user's content to include actual HTML tags for formatting, and you are not using a template engine, we recommend utilizing HTML Purifier to sanitize the markup server-side before it is saved. Do not trust strip_tags, or a regex, as it is always possible to trick them with something like '<scr<script>ipt>'. If you are using a template engine, make sure the saved data is output with some sanitation - Twig does this by default; Smarty would utilize:

<p>{$foo|escape:'html','UTF-8'}</p>
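
And a minimal sketch of the HTML Purifier route mentioned above - the 'bio' field and include path are placeholders for your own setup:

<?php
require_once 'HTMLPurifier.auto.php'; // bundled autoloader from HTML Purifier

// Sanitize user-supplied HTML before saving it; Purifier strips anything
// not on its whitelist, including nested trick payloads.
$config   = HTMLPurifier_Config::createDefault();
$purifier = new HTMLPurifier($config);
$clean    = $purifier->purify($_POST['bio']);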

Take care to review your code, and make sure you are not leaving your users, and your systems, vulnerable.
