A Picture is Worth a Thousand Words but the Beholder Chooses the Thousand Words

When I first started programming, I would design via drawings. Creating boxes to represent database tables or components or what have you. Lots of rectangles and lines. I remember it being meditative.

Yesterday I was tasked with designing a system for self-deposit of items to a digital archive. I was asked to deliver a diagram of interacting components. This caused me mental anguish, because what ultimately needs to exist for this service is a complex collection of remote services, local processes, system behaviors, and data structures.

And it struck me hard in the face, I don’t want to design by drawings. I want to design by text. After all, most of my professional hours have been devoted to writing in code. But I don’t want to start coding.

But here I sit, thinking about what all I need to design. And it hits me: I need to define the end-user interaction with the system. So I’m writing down what the user sees on the first page after they login, the actions they can take to begin archiving their own material, or the material of their professor, or the material of the entity for which they are a delegate.

With this initial list for after they login, things will spread out to other lists. And slowly I can explore the system through words.

And then, we’ll break out the crayons and start drawing our interpretation of what those words are.

Using Git Bisect for Finding When a Bug Was Introduced

Previously I wrote about updating a framework and automated tests, and included a brief mention of `git bisect`. I’d like to expand on the power of `git bisect` when applied to your repository.

First a definition of the command:

git-bisect – Find by binary search the change that introduced a bug

The man page is quite helpful, but its application may not be immediately obvious. This has been my use case.

Step 1 – Prepare Your Bisect

$ git bisect start
$ git bisect bad
$ git bisect good 3f5ee0d32dd2a13c9274655de825d31c6a12313f

First, we tell git that we are starting the bisect. Then we indicate at what point we first noticed the bug – in this case HEAD. Finally we indicate at what point we were bug free – in this case at commit 3f5ee0d32dd2a13c9274655de825d31c6a12313f.

Step 2 – Start Your Engines

$ git bisect run ./path/to/test-script

We’ve told git-bisect which commits were good and bad. Then, with the above command, git will iteratively step through history and run ./path/to/test-script. And what is ./path/to/test-script? It is any executable file that exits with a status of 0 or non-zero [learn more].

If the test-script exits with status 0, the current commit is considered good. Otherwise it is considered bad. Eventually git-bisect will converge on the commit that introduced the bad result and report the bad commit log entry.
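
For illustration, here is a minimal sketch of such a script in Ruby. It simply wraps a single test file (as discussed in the next section) and propagates the result as an exit status; treat the test file path as a placeholder:

#!/usr/bin/env ruby
# Minimal sketch of a bisect test script: exit 0 when the check passes,
# non-zero when it fails. git-bisect only cares about the exit status.
passed = system("ruby ./test/unit/page_test.rb")
exit(passed ? 0 : 1)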

Script Use Cases

So what is this ./path/to/test-script? Sometimes I’ve used `rake` for my Rails project. But that can be overkill. I’ve also run bisect with one of the test files as the Good/Bad indicator (i.e. ruby ./test/unit/page_test.rb). In these cases, the tests were in my repository, which meant they were equally volatile.

I have also written a script that sat outside the repository I was bisecting. This was really powerful when my manager asked “When did this seemingly strange behavior get introduced?”

I wrote a Capybara test that automated the steps my manager reported for reproducing the error and an assertion of the expected behavior.
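
That script is long gone, but a rough sketch of it would look something like the following – the URL, the steps, and the expected content are all hypothetical stand-ins:

#!/usr/bin/env ruby
# Hypothetical reconstruction of the out-of-repository bisect script.
# Exit 0 when the expected behavior is observed, exit 1 when the odd behavior appears.
require "capybara"
require "capybara/dsl"
include Capybara::DSL

Capybara.run_server = false
Capybara.current_driver = :selenium
Capybara.app_host = "http://localhost:3000" # a server running the bisected checkout

begin
  visit "/pages/about"                       # the steps my manager reported
  raise "odd behavior" unless page.has_content?("Expected Content")
  exit 0
rescue StandardError
  exit 1
end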

Sure enough, a few weeks prior, I had introduced the odd behavior as a side-effect. At the time I didn’t have test coverage for that particular behavior. I patched the error, answered my manager’s question, and had an automated test that I could drop into my repository to make sure I didn’t reintroduce that behavior.

As with most git utilities, I’m sure I’m only scratching the surface of git-bisect, but even with only using my above process, I’ve saved plenty of time and mental energy.

I have also created, long ago, a repository that highlights and automates several git commands. Follow the directions and it will walk you through a series of commits.

Hopping the Tracks

“It’s time to move on, time to get going
What lies ahead, I have no way of knowing
But under my feet, baby, the grass is growing
It’s time to move on, it’s time to get going”

“Time to Move On” – Tom Petty

Effective July 1st, 2012, I will be joining the Digital Library Services Department at the Hesburgh Library at the University of Notre Dame.

Over the past three years, I have assumed primary development and maintenance of Conductor and worked on several other projects – most notably the ND Campus Map and the OIT Site Redesign implementation.

I am stepping away from a team of remarkably creative, passionate, and professional friends.  During the past three years, I have had the wonderful privilege of watching a team grow and further develop into creative leaders on campus and beyond. We have all pushed ourselves to continually improve what it is we do.

I am stepping into a team that is growing and shifting focus as they work to guide Library Services into an emerging highly digitized and cross-referenced environment. I will be working on process improvement amongst the developers, mentoring, and contributing to Open Source projects (i.e. https://github.com/projecthydra/).  I’m certain there will be other interesting projects that I will work on as well.

I am excited about the opportunity to work day-to-day with several other developers…many of whom I’ve worked with before at my previous job – as of July 1 there will be 4 Lightsky alums working at the Hesburgh Library. In working with other developers, my goal is that we all grow our craft and become passionate and pragmatic programmers.

An interesting bit of trivia: the day that I started at AgencyND was the day that I was scheduled to have my first face-to-face interview at the Hesburgh Library – there were a lot of vacations and conferences that pushed my initial Library interview back.  Instead of taking a chance that I would get the Library position, I opted to take the position at AgencyND – a decision that was right for me at the time.  Instead, my former and soon-to-be-again co-worker, Dan Brubaker-Horst, got the position.

What I’m Reading…And Why

Books that I’ve Been Reading

  • “Clean Code: A Handbook of Agile Software Craftsmanship” by Robert Martin
  • “The Clean Coder: A Code of Conduct for Professional Programmers” by Robert Martin
  • “Domain Driven Design: Tackling Complexity in the Heart of Software” by Eric Evans
  • “Working Effectively with Legacy Code” by Michael Feathers
  • “Objects on Rails” by Avdi Grimm
  • “The Cucumber Book: Behaviour-Driven Development for Testers and Developers” by Matt Wynne and Aslak Hellesoy
  • “Crafting Rails Applications: Expert Practices for Everyday Rails Development” by Jose Valim
  • “Rails 3 in Action” by Ryan Bigg and Yehuda Katz

Much like a doctor practices medicine, I practice coding.  That isn’t to say I’m an amateur, as I take my profession very seriously.  I want to get better at it.  The above books are a lot to ingest in a short period of time, but I find that reading multiple different sources at the same time helps to better sow the seeds.

This isn’t to say I’m just now picking up software books and reading them.  I’ve read plenty of other software related books prior to this recent “binge” – The Pragmatic Programmer, Refactoring, Smalltalk Best Practice Patterns, Test Driven Development, Implementation Patterns, The Rails Way, and several more.

In looking at my pattern of learning, I tend to go the path of assimilating lots of information in short bursts of time; sometimes this is triggered by an emerging issue or problem-set, but more often it serves as inspiration and guidance for improvement.

Keeping One Constraint Salient Instead of Ten

“While teaching programming, Matt observed that giving the children the single rule that they shouldn’t have methods longer than 10 lines resulted in the children writing code with SRP, encapsulation, and other positive attributes. Matt argued that beginning with a simple rule that could get you 80% of what you want is likely better than 10 rules that, if followed correctly, could get you 100% of what you want.” – Jay Fields http://blog.jayfields.com/2012/03/when-to-break-apart-your-application.html

The above concept, write software with one constraint, is excellent advice.  There is a tremendous amount of literature concerning best practices, methodologies, and principles; all of which can be challenging to internalize.

I can safely say that any methods I’ve ever worked with, or written, that were more than 10 lines were inevitably difficult to test.  Or, in the case of my pre-automated testing days – The Dark Days – those long methods were inevitably the cause of the most bugs.

In fact, I would wager that the probability of bugs in a method is N log(N), where N is the number of lines in the method.  In other words, as the size of a method grows, the probability of bugs grows faster than linearly.  This is based on experience and conjecture, nothing more.

This single constraint is in tension with reading numerous books…after all, how can I possibly hope to internalize all of the information I’ve been reading?

Abstraction

“The TDD community has been recently buzzing with the realization that code becomes more general as tests become more specific, revealing that test-driving code alone will push it to a more appropriate level of abstraction. It is still up to the human(s) at the keyboard to change the class and method names to match.” – Tim Ottinger and Jeff Langr http://pragprog.com/magazines/2011-02/abstraction

In some ways, each of the books I’m reading is a test against the mental model that I’ve built up in my 14+ years of programming practice.  In many cases, the tests reveal issues in my underlying modus operandi. I work through those ideas and attempt to push towards more general solutions.

Case in point, I recently wrote a small command-line application based on a problem domain that I understand – the Diaspora RPG Cluster Creation rules/algorithm.  I used Avdi Grimm’s “Objects on Rails” as a general template for what I would do.  I wrote about this experience on my personal blog.

By writing my tests, and testing my mental model of “what is programming”, I was able to personally reaffirm that Avdi’s suggestions are in fact a good practice. Namely that small classes, small methods,  unit tests, and acceptance tests are all integral for making software flexible and maintainable.

Intelligence and Wisdom

“Intelligence is the ability to learn from your mistakes. Wisdom is the ability to learn from the mistakes of others.” - Anonymous

I don’t work with cutting edge technology; however, I do work with yesterday’s cutting edge technology. So I look to the experiences of others and try, as best I can, to apply them to my day-to-day software practice.

To also best learn from others, I’ve been helping out with the Rails pull requests…examining the code and testing the patches, as well as participating to some small degree in conversations concerning the code.

On Being a Lightning Rod

The Gathering Storm

Thus far, I’ve avoided following much of the drama in the Ruby and Rails community. The Rails community prides itself on opinionated software, which naturally lends itself to drama.

During DHH’s keynote at Railsconf 2012, it felt to me that he was attempting to draw fire away from the other members of the Rails core. Regardless of intention, I believe he has it right. Let the creator of Rails, our flag bearer, be the lightning rod.

During the Rails Panel session, one of the attendees asked about the Github issues count. At the time, there were 840 open issues. That is a lot. And there was rightful concern about this.

A challenge was issued that everyone attending should go out and work at updating 3 of the tickets. If everyone did that, we could knock out the queue.

I went and did my part, but unfortunately, all I could do was add comments. I had no power to close a ticket. In fact, it’s likely that I simply added to the chatter of a ticket.

Declaring Bankruptcy

As I see it, there are two bankruptcy options. The first is closing all the issues. This has been done before when Rails transferred issue tracking from Lighthouse to Github. It was  suggested, perhaps offhandedly, that it needed to be declared again. The second option that I see is delegating the work.

When I was working at my previous job, we had a growing number of issues, and it was a lot to handle. New features were being requested alongside fixes and changes.  I found the issues system to be overwhelming.  I not only had to prioritize, but request more information, then wait for that information to arrive, during that time I’d go through the list again, trying to find work.  There was a tremendous amount of noise.

To solve the problem, a new project manager stepped in and immediately took charge of the issues.  He was hands off about everything else, instead wanting to get a handle on what was obviously causing mental drag on the development team.

Initially, he went through and curated the list…determining each ticket’s state: was more information required (and if so, what), was the problem so old that it needed to be re-verified, etc. In doing this, the team was able to get an assessment of where things stood.

Both methods work, but the first method is kind of like Congress punting issues onto next year’s Congress. Things will get done in the short term, but a greater storm is brewing.

Proposed Solution

My proposed solution is that a small group of people (perhaps even one person) be given access to http://github.com/rails/rails to curate issues. I don’t know if the current setup at Github would allow granular permission for managing issues, without managing the code base, but that would be ideal. Failing that, granting someone contributor rights to rails/rails with the express purpose of managing tickets.

To be clear, members of the Issues team would not be writing any Rails production code.  The Issues team would instead focus on making sure that the Development team can easily find actionable issues.

It would be a matter of creating the heuristics for handling bugs. Think about it, presently the rubyonrails.org “Bugs/Patches” link goes directly to the open Github issues. We, as a community, are not providing a lot of guidance for submitting well-formed issues. Yes, I know the Rails Guides has help for submitting issues.

I firmly believe the Rails core members should focus on executing the best technical direction for the framework. There needs to be someone else managing the chatter of the issues; this chatter creates a mental drag on the day-to-day development of Rails.

I’m willing to be part of the proposed small group whose responsibility it is to close stale tickets, request further information from those that are insufficient, and make sure an issue for Rails Core is in the best state possible.

I have a day job and a family, but believe that I can contribute a portion of my time to making Rails better, and I see a particular itch that needs scratching.

Planning for a Big Update That Should go Unnoticed

Two weeks ago, the end of life for Ruby 1.8.7 was announced.  This affects several of the applications that I work on – in particular Conductor.  About a year and a half ago, I began the process of updating Conductor to Ruby 1.9, but the update was less than smooth.  There were several libraries that I was dependent on that were not yet updated.

I would love to have helped update several of these libraries, but my time is finite, and I’ve gotta keep the trains running.

So I waited.  And this past week, I decided it was time.  I spent about three days running my tests and fixing what was broken. Mercifully, all but one of the libraries that I was dependent on had been updated. Thank you, Open Source Community, for working on these libraries.

I decided to update the one library myself.  The change was relatively small.  And eventually, all of my tests were passing.  However, I wanted to make sure that the version update changes had no impact on the public facing webpages that Conductor manages.

So I wrote another script that executes the following algorithm:

  • Given a list of sites and associated URLs for that site
  • Download the site’s templates
  • Fire up a local webserver with the Ruby 1.8.7 branch
  • Request the site’s URLs against the 1.8.7 branch and store the results
  • Fire up a local webserver with the Ruby 1.9.3 branch
  • Request the site’s URLs against the 1.9.3 branch and store the results
  • Compare, using the UNIX diff command, the results of the 1.8.7 and 1.9.3 branches.  If there are any non-whitespace differences between the two result sets, then investigate.

In this way, I can be reasonably confident that my changes have no impact on the front-facing look of the hosted websites.

Take a look at the source code of the script to see the algorithm in action.
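
The real script lives alongside Conductor, but a rough sketch of the approach looks something like this – the branch names, ports, directories, and URLs below are all hypothetical stand-ins:

#!/usr/bin/env ruby
# Rough sketch: render the same URLs against two branches and diff the output.
require "open-uri"
require "fileutils"

SITE_URLS = ["/", "/about", "/news"] # hypothetical list of URLs for one site

def capture(branch, port, dir)
  FileUtils.mkdir_p(dir)
  system("git checkout #{branch}")
  pid = spawn("rails server -p #{port}")
  sleep 15 # give the server a moment to boot
  SITE_URLS.each do |url|
    html = URI.parse("http://localhost:#{port}#{url}").read
    File.open(File.join(dir, url.tr("/", "_") + ".html"), "w") { |f| f.write(html) }
  end
ensure
  Process.kill("TERM", pid) if pid
end

capture("ruby-1-8-7", 3018, "tmp/results-1.8.7")
capture("ruby-1-9-3", 3019, "tmp/results-1.9.3")

# Any non-whitespace difference between the two runs warrants investigation.
system("diff -rbw tmp/results-1.8.7 tmp/results-1.9.3")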

I’m going to update the script to crawl through the various GET requests on the admin, to verify behavior.

Shooting the Trouble

Yesterday, Conductor users began reporting errors with the Paste functionality of the CKEditor, the rich text editor used by Conductor.  The problem manifested on some browsers, but not all – Chrome 17 appeared immune, but there were reports of problems from Firefox 3.6, IE 9, IE 8, Chrome 16.

<Quasi-Religious-Diatribe>

I loathe, despise, and detest Paste from Word for HTML content.  It just doesn’t work well.

You would never rely on Google Translate or Babelfish to convert your English brochure to Spanish and then give that brochure to native Spanish speakers.  Did you translate that to Castilian, Cuban, Mexican, or some other Spanish dialect?  So why would you write your content in Word, then paste it into a browser and expect a high-fidelity copy?

This is a complicated problem that I wish were resolved.  I understand that HTML can be intimidating…especially when you bring CSS and browser compatibility into the mix…but even the “best translation service” is fairly terrible to a native speaker.

So roll up your sleeves and get comfortable with the HTML.  If you are writing anything for the web, you need to understand it.

And hats off to those few people around the world who maintain the Rosetta Stone for Paste from Word functions.  It is a thankless job that requires working with some truly horrific, ever-shifting data and attempting, as best you can, to map it to an ever-shifting landscape of browser implementations of HTML.

</Quasi-Religious-Diatribe>

deep breaths…count down from 10, 9, 8, 7, 6, 5, 4, 3, 2, 1…

Okay, I’m back.

The Initial Problem

The problem I was trying to solve was a pernicious Chrome and Safari paste-from-Word bug. First, let’s follow the steps below to see the initial problem.

Step 1: Copy some simple text from Word

Nothing fancy, just a paragraph with a bulleted list.

Copy from Microsoft Word 2008 for Mac

Step 2: Paste Text into Chrome

Things look reasonable, though I don’t like the copied bullet. At this point, most people save their page and go about their business.  It’s a reasonable thing to do. But step 3 reveals the problem.

Paste to Chrome Step 1

Step 3: Toggle into Source View Mode

In the source view, there is a <p>&nbsp;</p> that “just appears”.  As it turns out, that little bit of HTML is present in Step 2, but is for some reason invisible in the CKEditor.

Paste to Chrome Step 2

Step 4: Toggle out of Source View Mode

And the paragraph appears.

Paste to Chrome Step 3

The Initial Yet Problematic Solution

As always, go to Google when you encounter a coding problem.  And as it turns out, the CKEditor paste error is a known problem…without an elegant solution.  What was happening was that the Paste action in CKEditor was wrapping the content in a P-tag.

If I were to copy “Simple Paragraph” from Word, the pasted content would be something like "<p><p>Simple Paragraph</p></p>".  Chrome would resolve that by creating "<p>&nbsp;</p><p>Simple Paragraph</p>", but not before the CKEditor had rendered Step 2 from above.

CKEditor provides a hook for the Paste event.  So I implemented a solution, but unfortunately it wasn’t adequate for all browsers.  So, I spent a good portion of yesterday investigating what was going on.

Painful Discovery

While I was working on the solution, I discovered something truly horrific.  Each browser receives different pasted values from Word.  And likewise handles this paste differently.

Chrome on OSX

…60+ lines of XML declarations then…
<!--StartFragment-->

<p class="MsoNormal">This is a paragraph</p>

<p class="MsoListParagraphCxSpFirst" style="text-indent:-.25in;mso-list:l0 level1 lfo1"><!--[if !supportLists]--><span style="font-family:Symbol;mso-fareast-font-family:Symbol;mso-bidi-font-family:
Symbol">·<span style="font-family: Times New Roman; font-size: 7pt; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
</span></span><!--[endif]-->Item one</p>

<p class="MsoListParagraphCxSpLast" style="text-indent:-.25in;mso-list:l0 level1 lfo1"><!--[if !supportLists]--><span style="font-family:Symbol;mso-fareast-font-family:Symbol;mso-bidi-font-family:
Symbol">·<span style="font-family: Times New Roman; font-size: 7pt; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
</span></span><!--[endif]-->Item two</p>

<!--EndFragment-->

Firefox on OSX

…170 lines of style declarations then…
<p class="MsoNormal">
  This is a paragraph
</p>
<p class="MsoListParagraphCxSpFirst" style="text-indent:-.25in; mso-list:l0 level1 lfo1">
  <span style="font-family:Symbol; mso-fareast-font-family:Symbol; mso-bidi-font-family: Symbol">
    <span style="mso-list:Ignore">·
      <span style="font:7.0pt &quot; Times New Roman&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>
    </span>
  </span>
  Item one
</p>
<p class="MsoListParagraphCxSpLast" style="text-indent:-.25in; mso-list:l0 level1 lfo1">
  <span style="font-family:Symbol; mso-fareast-font-family:Symbol; mso-bidi-font-family: Symbol">
    <span style="mso-list:Ignore">·
      <span style="font:7.0pt &quot; Times New Roman&quot;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;</span>
    </span>
  </span>
  Item two
</p>

And all of the other browsers receive comparably different information from Microsoft Word.

Final Pasted Value That Is Saved in Conductor

And here we have the “final” pasted value of several different browsers.

Chrome 17 on OS X

<p>This is a paragraph</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </span>Item one</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </span>Item two</p>

Firefox 10 on OS X

<p>This is a paragraph</p><ul><li >Item one</li><li >Item two</li></ul>

Safari 5.1.1 on OS X

<p>This is a paragraph</p><ul><li   >Item one</li><li  >Item two</li></ul>

Opera 11.6 on OS X

<p>
  &nbsp;</p>
<p class="MsoNormal">
  &nbsp;</p>
<p class="MsoNormal">
  This is a paragraph</p>
<p class="MsoListParagraphCxSpFirst" style="text-indent:-.25in;mso-list:l0 level1 lfo1">
  <span style="font-family:Symbol;mso-fareast-font-family:Symbol;mso-bidi-font-family:
Symbol"><span style="mso-list:Ignore">&middot;<span style="font:7.0pt &quot;Times New Roman&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </span></span></span>Item one</p>
<p class="MsoListParagraphCxSpLast" style="text-indent:-.25in;mso-list:l0 level1 lfo1">
<span style="font-family:Symbol;mso-fareast-font-family:Symbol;mso-bidi-font-family:
Symbol"><span style="mso-list:Ignore">&middot;<span style="font:7.0pt &quot;Times New Roman&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </span></span></span>Item two</p>

Chrome on Windows

<div>This is a paragraph</div><div>•<span class="Apple-tab-span" >  </span>Item one</div><div>•<span class="Apple-tab-span" > </span>Item two</div><div><br></div>

IE8 on Windows

This is a paragraph<BR>•&nbsp;Item one<BR>•&nbsp;Item two<BR>

As you can see, the above content varies wildly.  It is the result of three different programs negotiating the Copy/Paste behavior: Microsoft Word, the browser, and CKEditor.  And in many cases, the pasted HTML is quite terrible – I’d say only Firefox and Safari produce proper HTML given the source Word document.

The Current Solution to the Paste Issue

Below is the code that I originally tried for fixing the paste from Word problems in Chrome and Safari. It ultimately failed to hold up, so keep reading after the code for the solution I settled on:

CKEDITOR.on("instanceReady", function (ev) {
  ev.editor.on("paste", function (e) {
    if (e.data["html"]) {
      // Strip lang, style, size, face, and bizarro Word tags
      var input = e.data["html"].replace(/<([^>]*)(?:lang|style|size|face|[ovwxp]:\w+)=(?:'[^']*'|"[^"]*"|[^\s>]+)([^>]*)>/gi, "<$1$2>");
      var output = "";

      // The Paste action in CKEditor was wrapping the content in a p-tag;
      // By only using the innerHTML of the first element, the auto wrapping
      // of a p-tag instead wraps the first element in a p-tag.
      // So pasting: <p>Hello</p><p>World</p>
      //   Was Pasted as <p><p>Hello</p><p>World</p></p>
      //   Resolves as <p>&nbsp;</p><p>Hello</p><p>World</p> AND
      //     the <p>&nbsp;</p> was invisible
      //   Paste as Hello<p>World</p>
      //   Resolves as <p>Hello</p><p>World</p>
      // I have trepidations about this, but it appears to work in a
      // relatively general case.

      // Internet Explorer may not paste well-formed HTML, but instead
      // paste innerHTML
      if ($(input).html() == "" ) {
        output = input;
      } else {

        // Iterate over the top-level DOM elements
        $(input).each(function(key,value){

          // For the first top-level DOM element, we want the innerHTML, so
          // that it can be wrapped by a P-tag…either in the Browser or in
          // the CKEditor
          if (key == 0) {
            output += value.innerHTML;
          } else {
            // outerHTML exists in some browsers as a native property
            // It is likely more reliable than the html method (in fact
            // in Chrome, $("<div>Bob</div>").html() returned "Bob")
            if (value.outerHTML == undefined) {
              output += $(value).html();
            } else {
              // Likely more reliably than html(), as it is a native browser
              // method in some "modern" browser
              output += value.outerHTML;
            }
          }
        });
      };
      e.data["html"] = output;
    }
  });
});

Eventually, I opted to clean the HTML on the server side using the following regular expression:

text.sub!(/\A(\<p[^\>]*\>[\t\s]*\&nbsp\;[\t\s]*\<\/p\>)*/m,"")
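
For illustration, here is the substitution at work on a hypothetical pasted value (reusing the “Simple Paragraph” example from above):

text = "<p>&nbsp;</p><p>Simple Paragraph</p>"
text.sub!(/\A(\<p[^\>]*\>[\t\s]*\&nbsp\;[\t\s]*\<\/p\>)*/m,"")
text # => "<p>Simple Paragraph</p>"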

HTML Emails Inserting Spaces in Odd Locations

About two weeks ago, my team received a report of a problem in one of our system generated emails.  A small handful of the words in longer paragraphs were being split.

For example, there was a long paragraph (200 words or so), and the word “condition” was split into “condit ion” – a strange problem but one related to a previously discovered limitation in the venerable yet pervasive sendmail program which we used for delivering the emails.

The Challenge

Sendmail splits long lines after the 998th character.  It does this by adding a carriage return (like hitting Return on your keyboard).  What was happening was that the “t” in “condition” fell at the 998th character.  Further muddying the water was the fact that we were dealing with escaped HTML, so a quote (") is actually represented as &quot;. And there were also tags, which are invisible to a human reader.

The Fumbling

I was aware of the 998th character issue of sendmail, but didn’t know of a good work around.  I started chatting with Jaron, a fellow Notre Dame programmer and good friend of mine, about the problem.

Both of our initial understandings of HTML emails were that they simply worked.  That clearly was not the case.

Important Sidebar

Instead of starting the chat by stating the root cause, I asked for help with my proposed solution – a regular expression to add a carriage return after every period, so long as the period wasn’t part of an attribute of an HTML tag.

Thankfully, I only spent a few minutes going down that path before I stated the root cause – carriage returns were being injected into an HTML email, and they were breaking words.

When looking for help with a problem, don’t ask for help on a problem related to your proposed solution. Instead clearly state your understanding of the initial problem.  Then state your proposed solution for correcting the problem.

The Solution

After a lot of trial and error, we eventually settled on setting the Content-Transfer-Encoding of the email’s HTML part to base64 and encoding that part accordingly.
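
This works because Ruby’s Base64.encode64 inserts a newline every 60 characters, so no line in the encoded part ever comes near sendmail’s 998-character limit:

require "base64"

# Even a very long single-line body encodes to lines of at most 60 characters.
encoded = Base64.encode64("word " * 2000)
encoded.each_line.map { |line| line.chomp.length }.max
# => 60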

Below is our Rails 3.0.11 solution, it hasn’t been “cleaned up” but it highlights the key take-aways:

# Rails.root/app/models/notifier.rb
class Notifier < ActionMailer::Base
  def general_email
    # important configuration stuff
    # setting @object for template access
    mail { |format|
      format.text
      format.html(:content_transfer_encoding => "base64")
    }.deliver
  end
end

# Rails.root/app/views/notifier/general_email.html.erb
<%= Base64.encode64(%(<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
   "http://www.w3.org/TR/html4/loose.dtd">

<html lang="en">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  <title>#{@object.subject}</title>
</head>
<body>
  #{ @object.body.html_safe }
</body>
</html>))%>

And Now for Something Completely Different

In November I took over the account management of one of our clients on campus.  I say clients somewhat hesitantly, as our department has a cost recovery model in which we bill other departments for work.  Not everyone on campus operates this way, so we have a very interesting ecosystem.  But I digress.

I haven’t set aside my other responsibilities, but am instead adjusting the load to account for these changes.  And what I’m finding is moments of insight while traversing the maelstrom of my new and old responsibilities.

Lesson #1 – Clients are Demanding

While this may sound terrible, it is actually great.  Imagine if Mozart’s father hadn’t been demanding?  Without the external pressure to deliver, complacency is just around the corner.

I used to work for a small insurance agency, and one of my friend’s father was a very demanding client.  He would have all kinds of wild requests that seemed technically unfeasible.  Many people bemoaned his demands, but without him our company’s sales would have flattened.

He demanded that we all go to the moon, and our job was to help him build the spaceship.  Oh wait, that’s John F. Kennedy.

Lesson #2 – Clients Are Wrong

…then again so am I. A client understands their problem space, but may not understand the best steps towards the appropriate solution.  With an external lens, which my team can provide, we offer a different perspective on the problem.

There is a lot of potential for misunderstanding, as both sides are crucial for delivering the solution that will be an amalgam of all those involved.

And I know this can be frustrating.  Developers are notoriously cynical and often full of hubris.  I know I’ve fallen into the trap of thinking I know something as well as a client – or even better than them.  When in reality they have an abundance of nuanced knowledge that is never immediately obvious.

So if I think someone is wrong, I need to listen again, because they are saying something that is very likely right, but maybe they are not saying it in a way that I’m expecting to hear it.

Lesson #3 – Clients Want Regular Updates

And this is a good thing, it keeps us accountable for delivering their solution. I’ve found that transparent conversations are both appreciated and truly build a team. And, as an added benefit, our regular updates can help others be accountable for their part.

And this all boils down to practicing basic communication.  Yes, people will be upset if their site isn’t done on time, but it is better to find out about an issue early. Then the entire team, which has built trust (see Lesson #2), can rally and address those issues.  The resulting conversations may not be pleasant, but they’re better than the conversation you would have had later down the road.

Lesson #4 – I’ve Only Got One Head

And it’s really hard to wear more than one hat.  I’m working really hard to serve the needs of our client, but this is coming at a cost.  I’m not able to wear my developer hat as often as I used to – it’s a comfortable old hat that I’ve grown quite fond of.

I’m not the best developer, though I think I’m pretty good.  I’ve learned many of the lessons that I’d like to have learned, but I see other lessons to be learned that have nothing to do with machine compiled code, and I look forward to incorporating that into my personal and professional development.

Multiple SSL Certificates on One Apache Server

Conductor serves many different sites at the University of Notre Dame, however, not all sites in Conductor are under the nd.edu umbrella – www.holycrossusa.org is one of them.

SSL Error in Chrome on www.holycrossusa.org

One problem that arose is that non-administrative users needed to securely access the site.  As it was configured, anyone going to https://www.holycrossusa.org/ using the Chrome browser would see the SSL certificate warning shown above.  Other browsers would give even more pessimistic notifications.

What was needed was a separate SSL certificate on the Conductor for www.holycrossusa.org. The big gotcha is that most other documentation I found says to set the NameVirtualHost to the server’s IP address.  And that means the internal, or Local-IP address, as provided by the %A custom log format directive of Apache.  If you use your server’s Public-IP address, things may not work.

Below is the relevant /etc/httpd.conf entries.

# Configuration for conductor.nd.edu
NameVirtualHost [LOCAL-IP-ADDRESS-1]:80
NameVirtualHost [LOCAL-IP-ADDRESS-1]:443

<VirtualHost [LOCAL-IP-ADDRESS-1]:80>
	ServerName conductor.nd.edu
	ServerAlias *.conductor.nd.edu
	Include conf/apps/conductor.common
</VirtualHost>

<VirtualHost [LOCAL-IP-ADDRESS-1]:443>
	ServerName conductor.nd.edu
	ServerAlias *.conductor.nd.edu
	Include conf/apps/conductor.common
	RequestHeader set X_ORIGINAL_PROTOCOL https

	SSLEngine on
	SSLCertificateFile /path/to/conductor.crt
	SSLCertificateKeyFile /path/to/conductor.key
	SSLCACertificateFile /path/to/conductor.intermediate.crt
</VirtualHost>

# Configuration for www.holycrossusa.org
NameVirtualHost [LOCAL-IP-ADDRESS-2]:80
NameVirtualHost [LOCAL-IP-ADDRESS-2]:443

<VirtualHost [LOCAL-IP-ADDRESS-2]:80>
	ServerName www.holycrossusa.org
	ServerAlias www.holycrossusa.org
	Include conf/apps/conductor.common
</VirtualHost>

<VirtualHost [LOCAL-IP-ADDRESS-2]:443>
	ServerName www.holycrossusa.org
	ServerAlias holycrossusa.org
	Include conf/apps/conductor.common
	RequestHeader set X_ORIGINAL_PROTOCOL https

	SSLEngine on
	SSLCertificateFile /path/to/holycrossusa.org.crt
	SSLCertificateKeyFile /path/to/www.holycrossusa.org.key
	SSLCACertificateFile /path/to/holycross.intermediate.crt
</VirtualHost>