Category Archives: Web Development

Single Sign On with Iframes (eg Sharepoint)


Single Sign On is a great tool for implementing seamless user experiences. When you mix many third-party tools together, the authentication puzzle quickly stacks up. Single Sign On via SAML to the rescue!

Implementing SSO via SAML is well documented on the internet, so I won’t go into it here. However, imagine a scenario where you want to implement SSO that integrates a third-party application into your main app via an iframe. Iframes are notorious around the web, yet they are still used quite pervasively in places such as SharePoint and Outlook Web App.

The difficulty lies in the fact that Identity Providers do not permit their authentication screens to be embedded in an iframe. And rightly so. Auth screens collect credentials, and putting an auth screen inside an iframe makes it very easy for a hacker to steal a user’s account information.

So, how do you work around this?

Popups. Yes, popups.

While working on this problem, I was extremely resistant to the idea of using popups as they also have a bad rap in the internet community because of how they are abused. However, after reading this article, I gave in to the fact that popups are a “necessary evil”.

Remember that the end goal here is to provide as seamless an experience as possible. The central idea is that once a SAML flow is initiated in an iframe, a page is rendered that does the following:

1. Renders JavaScript that opens a new window and sends the user through the actual SSO/SAML flow.
2. Renders content in the iframe instructing the user that the popup may be squashed by the browser and to allow it, or to click a link that manually opens a new window to go through the SSO/SAML flow.
3. Keeps checking (via JavaScript) whether or not a logged-in session has been established. Once the user is logged in, the iframe page can be reloaded to show the logged-in content; a rough sketch of this follows below.
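
To make that concrete, here is a minimal sketch of the script such an iframe page could render. The /saml/init URL, the /sso/session_status endpoint, and the JSON shape are placeholders I'm assuming for illustration, not part of any particular Identity Provider's API; the explicit fallback link from step 2 would simply point at the same SSO URL in a new window.

  // Sketch only: URLs and JSON fields are assumed placeholders.
  var ssoUrl = "/saml/init";

  // Step 1: try to open the SSO/SAML flow in a popup. The browser may squash
  // it, which is why the page also shows an explicit "open sign-in" link.
  var popup = window.open(ssoUrl, "sso_window", "width=600,height=700");

  // Step 3: poll until the server reports a logged-in session, then reload
  // the iframe so it shows the logged-in content.
  var poll = setInterval(function() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/sso/session_status", true);
    xhr.onload = function() {
      if (xhr.status === 200 && JSON.parse(xhr.responseText).logged_in) {
        clearInterval(poll);
        if (popup && !popup.closed) { popup.close(); }
        window.location.reload();
      }
    };
    xhr.send();
  }, 2000);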

When a browser squashes a popup, it usually gives some indication that it did so and provides an option to open it. However, this is not always obvious, which is why we give an explicit link to open the SSO flow window.

This approach took about half a day to architect, after about a week of hair pulling trying to get SSO to work without popups. I recommend going the popup route; in fact, it is probably the only way to get this to work.

Testing tips with Capybara on Rails

Downtime, layoffs, security breaches, and bugs are among the worst things technology teams have to deal with. Surprisingly, bugs are the most manageable. To help keep them that way, here are a few application testing techniques for working with Rails.

Any time you deploy new code, you need to have confidence the new code works as advertised and that old code hasn’t broken down. There’s no way to have 100% confidence, but having good code coverage is a start.

Rails provides many ways to test, but in my opinion the two most important are unit tests and integration tests. Unit tests examine your low-level model code and make sure core functionality doesn’t break. In any modern web app worth its salt, you will also need full-stack integration tests. How do you know that an Ajax action actually replaced the element you thought it would? Functional controller tests are decoupled and assume form parameters are sent through in a certain way, but what if there is a bug in your HTML? You need to be able to reproduce user behavior exactly as a user behaves, with no assumptions. This couples things, but that’s okay. Better to be coupled and catch a bug than not catch it at all.

We use capybara and capybara-webkit for headless browser testing. I have a love/hate relationship with Capybara. On one hand, it provides the best full-stack testing interface for a Rails app I’ve ever seen and has saved me countless hours of debugging. On the other hand, I’ve hit so many weird synchronization issues (due to the asynchronous nature of Ajax) with capybara and capybara-webkit and spent countless hours pulling my hair out. See this discussion on Capybara’s Google group (https://groups.google.com/d/topic/ruby-capybara/sjgRGiNcL7g/discussion) and this one on capybara-webkit’s (https://groups.google.com/d/topic/capybara-webkit/i51R5I4sMCI/discussion).

They are not without faults, but the gains are so worth it when you see your test suite go green prior to a deployment. Also, most of Capybara’s synchronization issues have been solved in its latest version, I think > 2.0. Shout out to Jonas Nicklas for his great work on this project.

I do have a small bone to pick, though. Prior to v2, Capybara had a nifty helper method called #wait_until, which I used heavily to wait until an element appears. In v2 this was removed; see http://www.elabs.se/blog/53-why-wait_until-was-removed-from-capybara. Jonas says that there shouldn’t be any synchronization issues any more, and that if there are, it’s really just a matter of how you are testing. Hmm… shouldn’t test suites get out of your way, rather than tell you how to write your tests? I don’t know, perhaps he’s right, but I didn’t look kindly on rewriting our massive test suite. Luckily, Jonas is so awesome he provided this workaround that plugged and played right into our suite: https://gist.github.com/jnicklas/d8da686061f0a59ffdf7

# Drop-in replacement for the removed #wait_until helper: keep yielding until
# the block returns a truthy value or Capybara's default wait time expires.
def wait_until
  require "timeout"
  Timeout.timeout(Capybara.default_wait_time) do
    sleep(0.1) until value = yield
    value
  end
end
Awesome. Also, here’s a nifty Capybara pattern I just came up with and posted on my personal blog: http://p373.net/2013/02/22/capybara-custom-matcher-alternative/
If you want to add functionality to a Capybara::Session, or “page”, where you need to check multiple things, and want to do it in a DRY and reusable fashion, check out this snippet:

#Let's say you want to check super awesomeness on the page
#which involves checking multiple things, like the current path,
#the page has some content, and a particular css selector is present
#Like so:
page.should be_super_awesome

#in spec_helper.rb
Capybara::Session.send(:include, SuperAwesomeHelper::Session)

#The implementation:
module SuperAwesomeHelper
  module Session
    # Runs each check in order, records only the first failure, and raises
    # Capybara::ExpectationNotMet so the spec gets a descriptive message.
    def super_awesome?
      errors = false
      errors ||= "Wrong path" unless current_path == super_awesome_path
      errors ||= "Missing content: You are awesome" unless !errors and has_content?("You are awesome")
      errors ||= "Missing selector: #awesome-div" unless !errors and has_selector?("#awesome-div")
      People.all.each{|p| errors ||= "missing person #{p.name}" unless !errors and has_selector?(".person-#{p.name}")}
      !errors or raise Capybara::ExpectationNotMet, errors
      return true
    end
  end
end

Check the p373 blog post for all the gritty details.

Using console.time to test strategies for appending HTML in loops: createElement, innerHTML, and jQuery append

I have long heard that innerHTML is faster than creating elements when you don’t need to store the DOM element in JavaScript for further use. I had to do one such manipulation on Recognize’s pricing page.

Recognize’s pricing page copies text from the right features column for mobile considerations.

On page load, our pricing page grabs the text of the features in the right column and inserts the same text into the corresponding row. You can’t see it unless you resize your browser down to the size of a phone. We show the hidden copied text for mobile devices because we can’t fit that right features column on 480px-wide devices like the iPhone.

Recognize pricing page on an iPhone 4

I want this to happen fast. Because the data is stored in the existing HTML, I loop over the DOM. This is expensive, but it keeps my data DRY and lets devices without JS still view the site. Not to mention that it is important for the features to be in the HTML for SEO.

I profile my functions with console.time and console.timeEnd. For instance:

  function funStuff() {
    console.time("funStuff finished");
    // Do something expensive
    console.timeEnd("funStuff finished");
  }

That will output a time in the console. You can learn more about console helpers at Firebug’s Console API docs.

Let’s get down to business with my code on the Pricing page. Here is my first attempt, using document.createElement to append the HTML:

  P.prototype.copyFeatureText = function() {
    var $features = $("#features li");

    $(".features").each(function(j, el) {
      var pricingFeature = this;
      $features.each(function(i, el) {
        var span = document.createElement("span"),
            listItem = pricingFeature.getElementsByTagName("li")[i],
            text = this.innerText;

        span.innerText = text;
        listItem.appendChild(span);
      });
    });
  };

I’m doing a lot here. I’m looping over each package’s feature list (Startup Package and Business Package), and for each line I create a new span and give it the same text as the corresponding item in the features column. It looks slow.

I run this 10 times to have a decent sample size.
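
For reference, each sample is just the copyFeatureText call wrapped in the console.time pattern from above; the way the P instance is constructed and the reload between runs are assumptions for illustration, so every sample starts from the same DOM.

  var pricing = new P();          // assumed: however the page builds its P instance
  // Collect one sample, read the number from the console, then reload the
  // page before the next run so spans from the previous run don't pile up.
  console.time("copyFeatureText");
  pricing.copyFeatureText();
  console.timeEnd("copyFeatureText");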

The results in milliseconds:
43.698
44.765
45.872
45.381
45.101
45.932
57.034
37.946
36.279
44.404

Using document.createElement, this operation took on average 44.641ms.

Okay, I rewrote the copyFeatureText function to store fewer variables and to use innerHTML instead of createElement.

  P.prototype.copyFeatureText = function() {
    var $features = $("#features li");

    $(".features").each(function(j, el) {
      var pricingFeature = this;
      $features.each(function(i, el) {
        pricingFeature.getElementsByTagName("li")[i].innerHTML += "<span>"+this.innerText+"</span>";
      });
    });
  };

I run the profiling again. Here are the results in milliseconds:

56.768
51.154
56.942
45.673
49.746
38.897
45.057
48.204
53.077
51.538

With string concatenation and innerHTML, it takes slightly longer at 49.706ms.

The results show that, for this specific operation, innerHTML takes around 5 milliseconds longer. That is somewhat counterintuitive at first, given the decreased lines of code, but knowing more about what happens under the hood with +=, we find that += on innerHTML can be slow: each assignment forces the browser to re-serialize and then re-parse the element’s contents.

Clearly, appending elements on each list item slows down the operation. For my use it seems necessary, unless I were to strip out the entire set of DOM list item nodes and replace them with a documentFragment (sketched below), a strategy I believe to be overkill.
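
For the curious, a documentFragment version might look something like the sketch below: clone each list item off-DOM, append the span to the clone, queue the clones in a fragment, and swap the whole list back in with a single append. The naming mirrors the earlier examples, but this is an illustration of the rejected strategy, not code from the pricing page; the amount of extra machinery is why I call it overkill.

  P.prototype.copyFeatureTextWithFragment = function() {
    var $features = $("#features li");

    $(".features").each(function(j, el) {
      var items = this.getElementsByTagName("li");
      if (!items.length) { return; }

      // Assumes the list items share one parent (e.g. a ul) that contains
      // nothing but these items.
      var parent = items[0].parentNode,
          fragment = document.createDocumentFragment();

      $features.each(function(i, el) {
        // Build a detached copy of the list item, add the span, queue it up.
        var copy = items[i].cloneNode(true),
            span = document.createElement("span");

        span.innerText = this.innerText;
        copy.appendChild(span);
        fragment.appendChild(copy);
      });

      // Replace the whole list in one DOM operation.
      parent.innerHTML = "";
      parent.appendChild(fragment);
    });
  };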

To avoid += string concatenation, many suggest using Array.join(). Modern browsers, like Chrome, don’t show any difference in performance. But let’s test anyway using this strategy, plus using jQuery to append a string.

  P.prototype.copyFeatureText = function() {
    var $features = $("#features li");
    
    $(".features").each(function(j, el) {
      var pricingFeature = this;
      $features.each(function(i, el) {
        var listItem = pricingFeature.getElementsByTagName("li")[i],
            span = [];
            
        span[0] = "<span>";
        span[1] = this.innerText;
        span[2] = "</span>";
            
        $(listItem).append(span.join(""));
      });
    });
  };

Here are the results:
47.092
49.399
41.382
43.525
63.440
53.579
62.801
58.524
56.857
60.428

Appending a string via jQuery takes on average 53.703ms.

That is slower. Makes sense considering it uses jQuery. jQuery does quite a lot and can slow things down a bit at times.

I can see one way to quickly make this last example more performant: only reset the second index of the span array, creating the array once outside the loops.

  P.prototype.copyFeatureText = function() {
    var $features = $("#features li");
    var span = ["<span>", null, "</span>"];
    
    $(".features").each(function(j, el) {
      var pricingFeature = this;
      $features.each(function(i, el) {
        var listItem = pricingFeature.getElementsByTagName("li")[i];

        span[1] = this.innerText;
            
        $(listItem).append(span.join(""));
      });
    });
  };

The results are better:
53.848
36.993
57.184
40.089
54.411
52.891
59.836
48.242
49.737
49.209

Creating the array only once, and changing only the text index inside the loop, takes 50.244ms on average.

Okay so slightly faster.

Looking at the results, for now I’ll keep doing exactly what I was doing with document.createElement, even if the difference is only a few milliseconds in Chrome.