Some Facebook JavaScript Bookmarklets

September 10, 2014

Bookmarklets are pieces of JavaScript code that are stored as bookmarks in your browser and execute locally (i.e. inside the currently loaded page) when the bookmark is clicked.
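
For example, a minimal bookmarklet that simply displays the title of the current page looks like this (nothing Facebook-specific, just an illustration):

javascript:alert(document.title);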

Using the Facebook Graph API, you can take a look “behind the scenes” to retrieve the raw information of what is being displayed when you browse Facebook.

Requests under the URL http://graph.facebook.com/ return JSONified data about the requested object, which is identified by its ID.

Posts and Threads

Let’s look at threads (posts) and their IDs. There are a couple of ways the thread ID is stored in the thread URL, depending on where it is posted (page, group) and how you browse it (permalink or notification):

https://www.facebook.com/[name]/posts/[thread id]
https://www.facebook.com/groups/[group id]/permalink/[thread id]/
https://www.facebook.com/photo.php?fbid=[thread id]&set=[...]

Since the thread ID is numeric, a simple regex \d+ might seem sufficient to retrieve it. However, group IDs are also numeric, and page names may contain digits.

After a bit of experimenting, the regex I came up with to extract the thread ID from a Facebook URL is

(/[\/=](\d+)[&\/]?/.exec(window.location.toString().replace(/\/groups\/\d+\//,"")))[1]
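
To see what this expression does, here is a small sketch you can paste into the browser console; the URLs and IDs below are made up for illustration:

// made-up URLs covering the three shapes listed above
var urls = [
  "https://www.facebook.com/somepage/posts/123456789",
  "https://www.facebook.com/groups/987654321/permalink/123456789/",
  "https://www.facebook.com/photo.php?fbid=123456789&set=a.1"
];
urls.forEach(function (u) {
  var cleaned = u.replace(/\/groups\/\d+\//, "");  // drop the numeric group ID first
  var id = (/[\/=](\d+)[&\/]?/.exec(cleaned))[1];  // the first remaining number is the thread ID
  console.log(id);                                 // "123456789" in all three cases
});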

Using this regex, we can now craft a JavaScript routine to open a new window containing the Graph API result:

window.open("http://graph.facebook.com/" +
    (/[\/=](\d+)[&\/]?/.exec(window.location.toString().replace(/\/groups\/\d+\//,"")))[1] +
    "/comments", "_blank")

and create a bookmark for it. Since this WordPress installation does not allow embedding bookmarklet links, you need to:

  • Create a bookmark in your browser
  • Give it a name, such as “View Comments in FB Graph”
  • Set the URL or location to
javascript:window.open("http://graph.facebook.com/" + (/[\/=](\d+)[&\/]?/.exec(window.location.toString().replace(/\/groups\/\d+\//),""))[1] + "/comments", "_blank")
  • Click OK

But this does not give you the contents of the whole thread, just the comments.

To retrieve the whole post, we can use the Graph API Explorer.

The Graph API Explorer retrieves the details of a Facebook object, such as a post or thread, using the URL

https://developers.facebook.com/tools/explorer/?method=GET&path=[object id]

So, as we know how to extract the thread ID from a FB URL, let’s create a bookmarklet with the URL

javascript:window.open("https://developers.facebook.com/tools/explorer/?method=GET&path="+ (/[\/=](\d+)[&\/]?/.exec(window.location.toString().replace(/\/groups\/\d+\//,"")))[1], "_blank")

This opens the Graph Explorer with the desired ID. Click Submit to retrieve the data; you will probably need to click Get Access Token first.

Remove the Right Column

If you want to take screenshots of Facebook pages, you probably want to remove the right column before screenshotting, since it only widens the image without adding any of the content you want to save.

The top-most HTML container for right column content is called “rightCol” (yes, surprising).

To remove it from display, simply add this code to a bookmarklet:

javascript:var rc=document.getElementById("rightCol");rc.parentElement.removeChild(rc);
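
Unpacked, and with a guard for pages where the element is missing, the same idea reads like this (a readable sketch of the one-liner above):

// readable version of the bookmarklet
var rc = document.getElementById("rightCol");
if (rc) {
  rc.parentElement.removeChild(rc);  // remove the whole right column from the DOM
}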

Clean up the Likes Page

To get a screenshot of “selected” Likes on a Like Page, there is a way to delete the Likes we don’t like (haha).

Simply scroll down until the list of likes is complete, then run this bookmarklet:

javascript:var li=document.getElementsByClassName("_5rz");for(var i=0;i<li.length;i++){var l=li[i];l.onclick=(function(el){return function() { el.parentElement.removeChild(el);return false;};})(l);}

Now clicking on a Like preview image will remove the entry from the list, allowing you to retain only the desired entries, ready to screenshot.
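
Unminified, the bookmarklet does roughly the following; note that the class name _5rz is taken from the one-liner above and is tied to Facebook's markup at the time of writing, so it may change:

// attach a click handler to every Like entry (class "_5rz")
var likes = document.getElementsByClassName("_5rz");
for (var i = 0; i < likes.length; i++) {
  // the immediately-invoked function captures the current element,
  // so each handler removes exactly its own entry when clicked
  likes[i].onclick = (function (el) {
    return function () {
      el.parentElement.removeChild(el);
      return false;  // suppress the default click action
    };
  })(likes[i]);
}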


Capturing Awesomium Requests and Responses

September 2, 2014

The Awesomium browser control provides a set of events that the hosting application can handle, but (currently, as of 1.7.4.2) there is no way to access the underlying HTTP requests and responses.

There are a couple of questions on SO that try to solve this problem, all of which hint at using the FiddlerCore library to make the application hosting the browser control also act as a proxy for that control.

By using the proxy functionality of Fiddler and handling its events, it is possible to access the HTTP requests and responses and, for example, log data retrieved in AJAX calls. This answer on SO shows how Awesomium’s proxy settings are defined.

Since not all code samples on SO work with the current Awesomium version 1.7.4.2, here is my solution, based on the articles above.

In the App class (App.xaml.cs), FiddlerCore must be initialized:

  public partial class App : Application
  {
    log4net.ILog logger;

    protected override void OnStartup(StartupEventArgs e)
    {
      logger = log4net.LogManager.GetLogger(typeof(App));
      SetupInternalProxy();
      base.OnStartup(e);
    }

    private void SetupInternalProxy()
    {
      FiddlerApplication.AfterSessionComplete += 
        FiddlerApplication_AfterSessionComplete;
      FiddlerApplication.Log.OnLogString += 
        (o, args) => logger.Warn(args.LogString);

      FiddlerCoreStartupFlags oFCSF = FiddlerCoreStartupFlags.Default;
      //this line is important as it will avoid changing the proxy for the whole system.
      oFCSF = (oFCSF & ~FiddlerCoreStartupFlags.RegisterAsSystemProxy);

      FiddlerApplication.Startup(
        0,      // port 0 lets FiddlerCore pick a free port; it is read back later via oProxy.ListenPort
        oFCSF
        );
    }

The App class is also the place to analyze requests and responses. In this sample, JSONP responses (JSON wrapped in a jQuery callback) are unwrapped and deserialized into C# objects:

    private void FiddlerApplication_AfterSessionComplete(Session oSession)
    {
      var resp = oSession.oResponse;
      var ct = resp.headers["Content-Type"];

      if (ct.Contains(';'))
        ct = ct.Split(";".ToCharArray())[0];

      var req = oSession.oRequest;
      
      switch (ct)
      {
        case "text/html":
          // do something
          break;

        case "application/json":
          var json = oSession.GetResponseBodyAsString();
          if (json.StartsWith("jQuery"))
          {
            // strip the JSONP wrapper "jQueryXXX( ... );" to obtain plain JSON
            json = json.Substring(json.IndexOf('(') + 1);
            json = json.Substring(0, json.Length - 2);

            // deserialize into a C# object
            var content = JsonConvert.DeserializeObject(json);

            if (content != null)
            {
              var uri = new Uri(oSession.fullUrl);
              var q = HttpUtility.ParseQueryString(uri.Query);
              // parse query string
            }
          }
          break;

        default:
          // 
          break;
      }
    }

In the window hosting the browser control, we need to set the browser control’s proxy settings in the constructor:

public MyWindow()
{
  InitializeComponent();

  // "awe" is a namespace alias (e.g. using awe = Awesomium.Core;)
  var pref = new awe.WebPreferences
  {
    ProxyConfig = "http://127.0.0.1:" + 
      FiddlerApplication.oProxy.ListenPort.ToString(),
    Plugins = false,
  };
  this.webControl.WebSession = awe.WebCore.CreateWebSession(pref);
}

That’s it ;)


Browser Screenshot Extensions

September 2, 2014

If you want to take a screenshot of your current browser window, there’s always good old ALT-Printscreen, but this function captures the whole window, not just the contents, and copies it to the clipboard. Then you still need to open a graphics editor, such as Paint.Net, to crop, edit, and save the image.

There are, however, a couple of browser extensions to simplify the process, and support capturing the complete page contents, rather than just the visible part of the page.

Here’s the list of extensions I use:

Firefox

In Firefox, I use Screengrab (fix version). It allows you to save or copy-to-clipboard the complete page, the visible part, or a selected area of the current page.

In the settings, you can define the pattern of the file name of the saved image (default: HTML Title and timestamp), and the text that is generated at the top of the image (default: URL). The option “Quickly save” won’t prompt you for a file name.

I love this extension for Firefox – however, if the screenshot gets too big (about 1.5Mb on Win32, 3Mb on Win64), it silently fails and generates .png files of size 0.

Chrome

The extension Screen Capture (by Google) is now unsupported, and it did not work (read: the menu buttons did not invoke any recognizable action) on the latest versions of Chrome.

The extension Awesome Screenshot: Capture & Annotate supports capturing the complete page, the visible part, or a selected area of the page. After capturing, a simple picture editor allows you to crop the picture, or add simple graphics and text to the image. The file name of the saved image defaults to the page’s title, but can be edited in the Save As dialog.

Unfortunately, only the command “Capture visible part of page” works on Facebook pages – both “entire page” and “selected area” fail to capture.

Finally, the extension Full Page Screen Capture simply generates an image of the complete page and displays it in a new tab. From there, you need to invoke Save (Ctrl-S) to save the image to the default directory. The file name pattern is “screencapture-” plus the current URL. This extension provides no options.


Feature Request

July 28, 2014

You know that your product is missing a critical feature when a quick search (case in point: “Firefox search bookmark folder name”) brings up forum entries dating back at least 5 years:

"firefox search bookmark folder name"

“firefox search bookmark folder name”


Calling xsd.exe in VS 2013 Build Event

July 24, 2014

While working on an XML project, I wanted to call xsd.exe on an .xsd file during the build process, and found this solution on SO, which works for VS 2010.

For VS 2013, the solution did not work anymore, especially on systems that had no prior version of VS installed, since xsd.exe hides in a different location.

A comment to the answer illustrated how to query the registry correctly on x64 systems.

So my modified pre-build event looks like this:

call "$(ProjectDir)GenerateFromVSPrompt.cmd"
  "$(ProjectDir)"
  "$([MSBuild]::GetRegistryValueFromView(
    'HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v8.1A',
    'InstallationFolder', null, RegistryView.Registry64, RegistryView.Registry32)
    )bin\NETFX 4.5.1 Tools\xsd.exe"

all in 1 line.

If you use TFS as source control, you know that generated files need to be checked out before they can be overwritten.

I already wrote about TFS and code generation, and used vcvarsall.bat back then.

However, since we just need the path to tf.exe and use the same VS version, we can simply open a VS Command Prompt, run

where tf

and get the answer

C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\TF.exe

for VS 2013.

So our batch file GenerateFromVSPrompt.cmd looks like this:

set tf="C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\TF.exe"
rem %1 = project directory, %2 = path to xsd.exe (as passed by the pre-build event)
%tf% checkout %1MyXsdClasses.cs
call %1XSDBuilder.cmd %1 %2
%tf% checkin /comment:"build event" /noprompt %1MyXsdClasses.cs
rem clear a possible non-zero exit code from tf checkin (see below)
exit 0

If tf.exe cannot check in the file because it did not change during code generation, it returns with exit code 1, which in turn causes the build process to issue a build error and break. So we use exit 0 to clear the error condition.

Finally, my version of XSDBuilder.cmd is based on an SO answer, but stripped down to only what is necessary, since I only have 2 XSD files, AND they need to be processed together:

pushd %1
%2 MyXsd1.xsd MyXsd2.xsd /c /n:My.Project.Xsd
popd

and, as I write this, I realize that I really should merge both .cmd files into one … ;)

The build event is now executed correctly from VS, the VS Command Prompt, and on the build server.


Diamonds and Alcohol

June 17, 2014

“Your stats are booming!”, sayeth Teh WordPress.

But really, 160 spam comments about (mostly) diamonds and alcoholism within 24 hours is not what I expected.


Stumbling Upon the “Not Pre-Compiled” Error Message

May 12, 2014

I maintain an ASP.Net application, and I recently had to add a couple of new features. Development started with .Net 1.1, then 2.0, then 4.0. Since we were already on .Net 4, the new features are implemented in MVC3.

Everything worked fine until I wanted to deploy the ASP.Net-plus-MVC application.

I hit Publish, zipped the result, and unzipped everything into its usual directory.

But…

The file ‘/somepage.aspx’ has not been pre-compiled, and cannot be requested.

or, in German:

Die Datei /somepage.aspx wurde nicht vorkompiliert und kann nicht angefordert werden.

I asked the internetz, and they suggested checking referenced assemblies. So I installed the MVC3 setup (on IIS8), just in case. But that did not resolve my problem.

While browsing thru tons of useless (and sometimes wrong) tips and tricks, I found this answer on SO:

I got this error when I upgraded a site from 2.0 to 4.0. The error was caused by a file PrecompiledApp.config in the site’s root directory. Once I deleted that file, the site started working.

Then it struck me: I had deployed the previous version of the web application using my build process which also compiles (and merges) .aspx files.

I have not yet found the time to adapt this build process to also support MVC projects (or to test whether it already does!), so I deployed what Publish produces. And that output does not contain pre-compiled pages.

So when I overwrote the old version with the new version, the file PrecompiledApp.config remained.

After deleting the file, the web application started up again.

