Set age restrictions on Facebook apps with the Graph API

This one drove me nuts. Facebook allows developers to restrict applications based on a user’s age and location, which was a pretty important requirement for a recent alcohol brand application we were developing. The API method is called admin.setRestrictionInfo, and it makes what would otherwise be a tedious process very simple: no ‘enter your birthdate’ screens, no worrying about posts created by the application soliciting underage users. The method even takes a special parameter called ‘type’ that can be set to ‘alcohol’ (currently the only option for this parameter), which blocks users based on their local drinking age.

Sounds perfect. Until you try implementing it.

The abysmal Facebook documentation provides no examples of how to make this call. Scouring the forums, I discovered that the method only needs to be called once, and since this particular application made no use of server-side SDKs, there wasn’t a convenient way to make the call. I started making calls to the API via simple browser GET requests, knowing that the response would be ‘true’ if the call was successful (that, believe it or not, was actually specified in the documentation). I kept getting a variety of error responses. The structure of the request is:

https://api.facebook.com/method/admin.setRestrictionInfo?access_token=[YOUR_ACCESS_TOKEN]&format=json&restriction_str={%22type%22:%22alcohol%22}

I was getting the access_token from the “Access Token” string found on the application page in the developer dashboard. But for whatever reason, this method requires you to pass your application ID pipe-delimited with your application secret. This is the URL structure that ultimately returned ‘true’ when entered into the browser:

https://api.facebook.com/method/admin.setRestrictionInfo?access_token=[APP_ID]|[APP_SECRET]&format=json&restriction_str={%22type%22:%22alcohol%22}
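
If you’d rather script the call than paste a URL into a browser, here is a minimal PHP sketch of the same request using cURL (APP_ID and APP_SECRET are placeholders for your own credentials; this is an illustration, not code from the original application):

&lt;?php
//the app access token is the application ID pipe-delimited with the app secret
$access_token = 'APP_ID' . '|' . 'APP_SECRET';

$url = 'https://api.facebook.com/method/admin.setRestrictionInfo'
     . '?access_token=' . urlencode($access_token)
     . '&format=json'
     . '&restriction_str=' . urlencode('{"type":"alcohol"}');

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

//per the documentation, a successful call returns 'true'
var_dump($response);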

A very useful method with absolutely useless documentation. Hope this helps save someone else as much time as I wasted figuring this out…

Configuring GoDaddy SSL Certificates on Nginx

There are a number of articles out there on how to install security certificates from GoDaddy on an Nginx server, but the GoDaddy process seems to have changed since those articles were published, so I thought I would jot down the process I had to follow in the hopes that someone else finds it useful.

First, it’s worth mentioning that GoDaddy has pretty cheap SSL certificates. They aren’t the greatest (they only verify the domain), but they are pretty useful for Facebook apps, especially given Facebook’s recent drive to have more users access Facebook via HTTPS. This article was immensely helpful in getting me started in the right direction, but it left out two important steps that kept the certificate from being properly recognized.

You’ll need to break from the instructions in that article after step 2.3. When you select ‘other’ from the webserver dropdown on GoDaddy, your zip will not include the intermediary cert (whether it’s a Starfield or GoDaddy cert). You’ll need this intermediary cert, so head on over to GoDaddy’s cert repo, download gd_intermediate.crt (or sf_intermediate.crt if it’s a Starfield cert), and upload it to your server.

Instead of running

cat www.mysite.com.crt gd_bundle.crt > mysite_combined.crt

Run the following (replacing your domain name and intermediate cert name as necessary):

cat www.mysite.com.crt gd_intermediate.crt gd_bundle.crt > mysite_combined.crt

From there, the previously cited article should get you the rest of the way… BUT, there’s one more issue. When you try restarting Nginx, you’ll likely see an error. The intermediary cert doesn’t end with a line break, so concatenating the files runs two certificate boundaries together. Open up the combined cert and change this:

-----END CERTIFICATE----------BEGIN CERTIFICATE-----

To this:

-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
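
Once the combined cert is fixed, the relevant Nginx configuration is just a matter of pointing the SSL directives at the combined cert and your private key. A minimal sketch (the file paths and domain are assumptions; use wherever you uploaded your cert and the key you generated with your CSR):

server {
    listen 443 ssl;
    server_name www.mysite.com;

    # the combined cert built above, plus the private key from your CSR
    ssl_certificate      /etc/nginx/ssl/mysite_combined.crt;
    ssl_certificate_key  /etc/nginx/ssl/www.mysite.com.key;

    root  /var/www/mysite;
    index index.php;
}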

Placing “moov atom” at the beginning of an MPEG-4 video with FFmpeg

According to Atomic Parsley’s article on media metadata, MPEG-4 files consist of atoms (or boxes). One of the important ones is the moov atom, which holds the index information (seek points, durations, and so on) a player needs, while the actual audio and video samples live in the mdat atom. Long story short, after encoding a number of MP4 files I noticed that the entire video had to download before playing in Safari. The issue was that ffmpeg was placing the moov atom at the end of the MP4 file, so Safari could not determine seek points or other important information until the entire file (including the moov atom) had downloaded.

Fortunately ffmpeg now ships with a solution: qt-faststart. I followed these steps to get qt-faststart installed and integrated into my encoding process for MP4 files, and now videos play and can be seeked before they have fully downloaded. Note, this assumes a fairly recent build of ffmpeg.

First, find your ffmpeg source directory (I couldn’t remember where I installed it, so I just did a quick search):

$ find / -name 'ffmpeg' -type d

Next, install qt-faststart by browsing to the ffmpeg folder returned above and running the following commands:

$ cd /usr/local/src/ffmpeg/
$ sudo make tools/qt-faststart
$ sudo checkinstall --pkgname=qt-faststart --pkgversion="$(date +%Y%m%d%H%M)-svn" --backup=no --deldoc=yes --fstrans=no --default install -D -m755 tools/qt-faststart /usr/local/bin/qt-faststart

Now it’s as simple as calling qt-faststart via PHP. One thing to note: qt-faststart writes out a new file rather than converting in place, so you may want to delete the original file (in this case, $target) after you run the command.

$moov_atom = 'sudo qt-faststart "' . $target . '" "' . $destination . '"';
exec($moov_atom);
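
For example, a cautious sketch of that cleanup, using the same $target and $destination variables from above:

//only delete the original once the faststart copy exists and is non-empty
if (file_exists($destination) && filesize($destination) > 0) {
    unlink($target);
}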

Set far-future expires headers with Rackspace Cloud Files (sort of)

To keep parity with existing relationships, a client has required that we work with Cloud Files instead of Amazon’s CloudFront. There seems to be quite a bit of confusion about which offering actually performs better, but from my perspective, the Amazon API is much more intuitive and therefore easier to work with. That said, there seems to be a lot of misinformation out there about Cloud Files; after spending a day with it, I am pleased to conclude that the API is not as bad as I had initially thought. Rackspace could likely benefit from a little platform evangelism…

The biggest issue I’ve seen reported from a number of sources is the 72 hour TTL limit in the Rackspace control panel. Cloud Files uses the TTL as an expires header of sorts. So I jumped on support chat and quickly learned that the 72 hour TTL is only a limitation of the control panel; a far-future TTL can easily be set via the API.

For this particular project, I’m using CakePHP, so don’t get too caught up on that App::import static method. Assuming cloudfiles.php, cloudfiles_exceptions.php, and cloudfiles_http.php are all in a cloudfiles folder in the vendors directory…

function publish(){
    //load up the cloudfiles class
    App::import('Vendor', 'cloudfiles/cloudfiles');

    //create a connection with your Cloud Files username and API secret
    $auth = new CF_Authentication('USER_NAME', 'API_SECRET');
    $auth->authenticate();
    $conn = new CF_Connection($auth);

    //grab an instance of our container
    $container = $conn->get_container('CONTAINER_NAME');

    //get the container's current TTL (assuming this was an existing container)
    debug( $container->cdn_ttl );

    //now just call the make_public method with a TTL parameter
    //here I'm just setting it to 30 days
    $container->make_public(86400 * 30);

    //confirm the TTL was properly updated
    debug( $container->cdn_ttl );
    exit();
}

The script outputs TTL values of 259200 and 2592000 respectively; the TTL on the container is now 30 days instead of 3. I haven’t seen the updated TTL reflected in the Rackspace control panel yet, but I’ll update this post once the existing 72 hour TTL expires and I can confirm whether the panel catches up.

Speed up image extraction with ffmpeg

A product requirement for a media management application I am building is to automatically extract a user-defined number of images from an uploaded video file. The user can request up to 20 images to be extracted from a given video. My initial implementation was taking almost 5 minutes to complete a 20 image extraction from a 7 minute video.

private function generateThumbnails( $video, $destination, $duration, $thumbnail_count ){

    $duration = (int)$duration;
    $interval = floor( $duration / $thumbnail_count );

    $c = 1;
    while( $c <= $thumbnail_count ){
        $offset = $interval * $c;
        exec("ffmpeg -i \"{$video}\" -ss " . $offset . " -y -vcodec mjpeg -vframes 1 -an -f rawvideo -s 720x404 " . $destination . "thumb_" . $c . ".jpg");
        $c++;
    }

}

Turns out ffmpeg will decode the entire video up to the -ss offset if you supply the -i parameter before the -ss parameter, treating -ss as an accurate-but-slow output seek. Supplying -ss before -i makes ffmpeg seek the input directly (by keyframe) instead of decoding its way to the offset. So the simple fix was to call -ss before -i. The 20 image extraction now takes less than a second.

private function generateThumbnails( $video, $destination, $duration, $thumbnail_count ){

    $duration = (int)$duration;
    $interval = floor( $duration / $thumbnail_count );

    $c = 1;
    while( $c <= $thumbnail_count ){
        $offset = $interval * $c;
        exec("ffmpeg -ss " . $offset . " -i \"{$video}\" -y -vcodec mjpeg -vframes 1 -an -f rawvideo -s 720x404 " . $destination . "thumb_" . $c . ".jpg");
        $c++;
    }

}

Save page output as HTML with CakePHP

I’m currently working on a project that requires storing dynamically generated HTML files on a CDN. Without getting too far off topic, I need to dynamically create an HTML page from database data and then store that HTML file on a CDN (Amazon S3), which cannot parse PHP. Even if it could, these files could conceivably be hit many times, so this solution cannot be load dependent.

I was looking for a way to render a view and save it as an HTML file that could then be uploaded via the S3 API. In the CakePHP API, we see that the Controller’s render method returns the output of the View class’s render method, which returns the HTML output as a string. But we don’t want to call the Controller’s render method, as that will send the rendered page to the user’s browser; instead we want to use the View’s render method directly so we get the string value that can be saved into an HTML file.

Assume we have a publish controller method which is called when the user wants to publish the HTML file. We also have a private method to generate the HTML and save it as a file in the local filesystem.

<?php
class VideosController extends AppController {

    protected function publish( $id ){
        $html_file = $this->generate( (int)$id );
        //do whatever needs to be done with the HTML file
        //for example, this is where I upload the file to S3
    }

    private function generate( $id ){
        //set whatever data is required to build the page;
        //be sure this is done before instantiating the View class so all
        //set variables are passed when you pass this controller object in the constructor
        $video = $this->Video->read( null, $id );
        $this->set('video', $video);

        //instantiate a new View class from the controller
        $view = new View($this);

        //call the View object's render method, which returns the HTML as a string.
        //Note: the second argument is the layout ('html') and the third is the view
        //file to render (html.ctp in the /app/views/videos/ directory)
        $viewdata = $view->render(null, 'html', 'html');

        //set the file name to save the View's output
        $path = WWW_ROOT . 'files/html/' . $id . '.html';

        //import Cake's File utility class, which isn't loaded by default here
        App::import('Core', 'File');
        $file = new File($path, true);

        //write the content to the file
        $file->write( $viewdata );

        //return the path
        return $path;

    }
}

I haven’t thoroughly tested this yet, so let me know if you run into any problems; so far it seems to be working well. Note that this isn’t really production ready, as the generate method assumes success.

Setting layout in error pages based on Auth status with CakePHP

CakePHP error pages load within the default layout. This works most of the time, but for some applications I have a different layout file for logged-in users; for example, the navigation changes when a user is logged in. Normally, including the proper elements based on the user’s login status would be sufficient, but for a recent project the entire layout changes based on the user’s status, so I needed a way to be sure the proper layout was loaded when 404 errors appeared.

First thing to do is create an app_error.php file in your /app directory. Your AppError class should extend the ErrorHandler class. Now override the error404 method. You’ll have a reference to the controller via $this->controller, so you can access the Auth component. Just check whether there is a valid logged-in user, and if not, set the layout to ‘guest’, or whatever your layout happens to be named.

Be sure to call the parent method, passing in the $params variable, so the error is handled properly by ErrorHandler’s error404 method.

<?php
class AppError extends ErrorHandler {

    function error404($params) {
        if( !$this->controller->Auth->user() ){
            $this->controller->layout = "guest";
        }
        parent::error404($params);
    }

}

Remove SVN folders from projects in Windows

This is a huge time-saver — big thanks to Hacktrix.com for this one!

I keep version-controlled directories of code snippets, plugins, components, classes, etc. The problem is that when I paste some of these snippets into projects (especially the more complex ones with deep folder structures), the SVN directories get copied over as well, which can create all kinds of issues with the repos on our SVN server. Since I still work primarily from a PC, I wasn’t sure of a simple way to remove these directories.

Hacktrix.com has a great 2-step tutorial about adding a ‘delete SVN directories’ link to your right-click context menu in Windows.  They also have instructions for doing the same from OSX, but that was a bit less magical since it’s already easy enough to do that from the command line.

First, create a file with a .reg extension (cleanSVN.reg per the Hacktrix tutorial).  Paste in the following:

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Folder\shell\DeleteSVN]
@="Delete SVN Folders"
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Folder\shell\DeleteSVN\command]
@="cmd.exe /c \"TITLE Removing SVN Folders in %1 && COLOR 9A  && FOR /r \"%1\" %%f IN (.svn _svn) DO RD /s /q \"%%f\" \""

Double-click the saved cleanSVN.reg file, and ‘Delete SVN Folders’ will now appear as a context menu item! So whenever I paste a plugin directory into a new project, I can just right-click and remove the existing SVN directories. No more commit errors, no more accidental overwriting! Hacktrix seems to be a pretty good resource — check them out!


Password protect folders with NGINX

As a follow-up to the earlier post about unexplained NGINX 404 errors caused by a poorly written location block, I thought it might be worth sharing another bad bit of code I have seen in a number of NGINX config files in the wild. This topic is a bit more serious, though, as it involves password protecting folders rather than random 404s. It’s pretty common to have a location block that defines a webroot and an index, establishes password protection for the folder, and sets fastcgi params for dynamic page requests, like so:

location = / {
    root   /var/www/nginx-default;
    index index.php;
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpass;
}
include php.conf;

Note, the “include php.conf;” line above is just a quick way we keep our virtual host files clean. The php.conf file contains our location ~ \.php$ block.
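
For reference, here is a sketch of what such a php.conf might contain (the fastcgi_pass address is an assumption; yours may point at a TCP port or a different socket):

location ~ \.php$ {
    include        fastcgi_params;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass   unix:/var/run/php-fpm.sock;
}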

The code above is not secure. This location block is only applied to requests matching “/” exactly (notice the “=” sign). If you access a page or resource directly, you circumvent the authentication entirely! So a request to www.somedomain.com will prompt you for a password, but www.somedomain.com/index.php will serve index.php without one.

A quick jump over to the Nginx wiki shows that “^~” is what we want. Despite appearances, it is not a regular expression modifier: it marks a prefix match that, when it is the best prefix match for a request, stops Nginx from evaluating any regular expression locations. This means we are now protecting all contents within the desired folder. Because matching terminates there, be sure to define an additional dynamic page block within the containing location block if necessary (for instance, if you need to run PHP files within the protected folder for, say, phpMyAdmin). Notice our included php.conf file has been added within the location block to account for this.

location ^~ / {
    root   /var/www/nginx-default;
    index index.php;
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpass;
    include php.conf;
}
include php.conf;

I personally find the Nginx wiki very readable and informative. It’s worth taking the ten minutes to work out solutions there rather than copy-pasting from blogs; there are some great convention and pitfall articles in there as well. I hope to get to a post outlining our configuration based on some time spent at the wiki.

NGINX 404 errors

Somewhere out in the wild there’s an NGINX config file that a lot of people are copying (ourselves included, though we don’t remember where we found it!), and it’s likely causing some hard to track down 404 errors.

Fortunately we saw it crop up on one of our first NGINX dev boxes.

Within the server block, it’s pretty common to set up a location block that automatically disables logging and sets 30 day expires headers on static assets. This is best handled like so:

location ~* \.(jpg|jpeg|gif|css|png|js|ico|eot|svg|ttf|woff)$ {
    access_log        off;
    expires           30d;
}

The problem is that we had originally, and hastily, grabbed this location block from a tutorial:

location ~* ^.+(jpg|jpeg|gif|css|png|js|ico|eot|svg|ttf|woff)$ {
    access_log        off;
    expires           30d;
}

The difference is subtle, but the latter block improperly matches ANY URL ending in those strings, not just URLs with those file extensions (note the missing literal dot before the extension group). We have a number of hashed URLs that validate certain requests, and since the hashes are randomly generated, we occasionally saw 404 errors: matching requests were served as static files that didn’t exist on disk instead of being passed to the application. It wasn’t until we noticed that one of the random strings ended in “js” that we thought to look at this block.