AIAIO: Our Blog

The pulse and reviews of Alexander Interactive

Everyone is “the business”

For technologists who spend much of their time eyes-deep in the tools, platforms, and architectural drivers that great solutions require, it can be easy to feel isolated from the surrounding business. The business goals of the project and the financial context that surrounds and constrains it become, through the necessary processes of business analysis and project planning, several layers removed from the technology team’s internal representation of the project. Components to be built, system and application architectures, UML diagrams, task tickets, and burn down lists may be necessary constructs to run a development project, but they preserve little to none of the business context.

Just as the syntax of programming languages by and large lacks the ability to communicate the system architecture, the artifacts of project management and development planning lack the crucial ability to communicate the business context.

This is why the business and technology teams so often feel like separate factions, each harboring gripes about how the other lacks the context to understand the decisions that need to be made. (One beautiful and refreshing aspect of Ai’s culture is that it has the opposite character: we are pretty cozy here, despite our spacious office, and our size helps us to manage and minimize these divides.)

And here’s one big reason why “technical debt” has become a hot topic: it’s a concept that can do wonders to bridge the communication gap that often develops between technology and business teams over the course of a project.

This gap is itself a project risk, and as with all risks, we need to find the right tools to understand and mitigate it. Thus the (justified) popularity of “technical debt”.

But just as “technical debt” is a useful term for mitigating this risk, other terms can broaden or reinforce the border around an IT team. One, more pervasive in larger organizations, is “the business”. This is how IT project managers often refer to that shadowy side of the organization that issues, from its isolated realm, commands that the IT group must translate into actions and project plans.

“The business” will place constraints of budget, schedule, platform on the project. (All too often, these get delivered to us via another unfortunate neologism: “the ask”. “Ask” is a verb. It just is. It’s a verb. When someone drops “the ask” on the meeting table, they’re presenting a hard object with no creator: “the ask” has arrived, ineluctable and undebatable.)

“The business” will make midstream decisions seemingly ignorant of implications to the project’s technological commitments. “The business” also, of course, writes the checks, so we feel we have no absolute leverage.

It is in dealing with these constraints from “the business” that we regularly incur technical debt: to meet a deadline, or to facilitate a sudden change in requirements, we commit to compromises in the code or architecture that we know we’ll need to fix someday. (“Someday”: as with financial debt, technical debt is a tool at your disposal; but you have to fix a date to this payment to keep your debt from ballooning.)

The key lesson here is that we can’t conceive of these decisions as technology decisions. As Steve McConnell notes, “At the end of the day, all [technical] decisions made in this context are business decisions.” The business and technology teams are partners in the success of a project. Business decisions must take technology into account, and vice versa.

We can take it a step further, in fact: there is no “the business”. Each one of us is The Business.

Considering ourselves, the technologists, to be The Business means internalizing that each line of code we write, each component we build, each compromise we make affects the business context of the project, and ultimately the success of the wider organization. What’s the business value of documenting this code? What’s the business value of building this test script? There’s no reason why QA engineers and developers should labor in the absence of this notion of business value. Knowing the business context helps us make intelligent decisions, spend our time and energy wisely to focus on value, not problems or minutiae.

If we erase this construct of a separate “business”, we give the project a huge leg up: now, project direction needs to include business and technological context. Now, we are forced to have cross-disciplinary conversation around difficult decisions. Now, when a high-value and very difficult requirement becomes an architectural driver for the technology team, we can understand and plan around this big hurdle in the context of its overall importance.

The interest in recent years in managing technical debt is just this: an increasing interest in fostering common understanding around difficult technical decisions, and in providing institutional memory of debt incurred so that the organization can agree, and remember, to pay down that debt in the future.


Adding more security to your Pound and Varnish configuration

I recently needed a way to add SSL to varnish and decided to give Pound a try. There are some great howtos available on the web, but there is one thing I don’t like about the suggested configurations. The general suggestion is to add this to your pound config:

HeadRemove "X-Forwarded-Proto"
AddHeader "X-Forwarded-Proto: https"

and then to check for that header in your application, varnish, or wherever you need to check for SSL. However, by manually sending an “X-Forwarded-Proto: https” header directly to varnish on port 80, you can trick your backend application into thinking you are requesting information over HTTPS when you aren’t. While I don’t think this is exploitable by itself, I certainly don’t want to leave any room for hacker mischief.

My suggestion is to add one additional secret header in your pound config, and then sanitize the headers in varnish if that secret header is missing. For example, in my pound config, I added this:

HeadRemove "X-Forwarded-Proto"
AddHeader "X-Forwarded-Proto: https"
AddHeader "X-Pound: PUTARANDOMSTRINGHERE"

And in my varnish vcl_recv:

if (req.http.X-Pound == "PUTARANDOMSTRINGHERE" && req.http.X-Forwarded-Proto == "https") {
    unset req.http.X-Pound;
    #take any extra needed actions for SSL here
} else {
    unset req.http.X-Pound;
    unset req.http.X-Forwarded-Proto;

Now when I check for the X-Forwarded-Proto header in my application, I can be sure that the client really is making the request over HTTPS. Notice that I always remove the X-Pound header after I’ve checked for it, even if it is valid. There is no need for the application to ever see that header – no need to risk any leakage of my secret header, which could happen if a debug setting is ever left on in the application.
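
In the application itself, the check then reduces to reading the one header that survived Varnish’s sanitizing. Here’s a minimal sketch of that check (WSGI-style; the function name and environ key are mine, not from the configs above):

```python
def is_secure_request(environ):
    """True when the request reached us over HTTPS via Pound.

    Varnish has already stripped X-Forwarded-Proto unless the secret
    X-Pound header was present, so by this point the header is trustworthy.
    """
    return environ.get("HTTP_X_FORWARDED_PROTO", "").lower() == "https"
```

From here the application can, for example, refuse to serve login or checkout pages whenever is_secure_request() returns False.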


What’s Next for Ecommerce?

Michael Zeisser of Liberty Media delivered an engaging presentation at a 2012 conference on the history of the Internet. He outlined five primary phases of the industry in a concise and informative manner. His assertion that each phase or shift has generally lasted about 3 to 5 years was presented thoughtfully and supported by data. If the theory is to be believed, we are fast approaching the next fundamental shift.

Briefly, Zeisser’s history lesson follows shifts from the days of the dial-up ISPs to the mobile device expansion of today:

  1. ISP as Content – AOL, Prodigy, Compuserve
  2. Web Portals – Yahoo, Lycos
  3. Search – Google
  4. User Generated – Social, Facebook, Twitter
  5. Mobile – apps, mobile web, we walk around with the Internet in our pockets

Zeisser stopped short of offering a prediction or opinion on what might be the next phase of the industry, but postulated that it is imminently upon us. My focus is on how the next shift in the industry will impact (or in many cases be directly impacted by) the world of digital commerce. I am similarly avoiding predictions, but have contemplated those technologies that are certain to be at the center of it all.

So what will the next fundamental shift in ecommerce be?

Big Data

Beyond the hype and buzzword, when we finally get around to analyzing and making use of the petabytes of data that we as online merchants collect on our shoppers, the experience of finding products will be forever changed. This concept goes way beyond “you may also like” recommendations. The data exists, and the statistical techniques have already been invented, that can quite accurately predict exactly what I’m looking for based on a number of things that a website knows about me: my location, the keywords I typed at a search engine, my past shopping history, the time of day. Now we have to fully deploy this knowledge in the form of a consumer experience that “just knows.” It’s a weeknight at 9pm in the summer; I’m likely watching the Yankee game on TV, surfing Amazon on my iPad, and just saw a commercial for a product. The site should just know. Play the odds and guess what I’m doing based on everything it knows about me. It will take a few years to hone the algorithms, but I fully expect we’ll get there. Scary.


Voice

Checking sports scores on Siri is just the beginning. Combine Big Data with a semantically aware assistant who really understands what you want, and the concept of browsing a website and clicking or smudging around will forever change. As craftsmen of the visual user experience for online retail, the notion that our beautiful and highly-converting designs may one day join the annals of Internet phases past is terrifying. But I believe it’s true. And designing ecommerce experiences around voice will be the next frontier.

Alex: Siri, my wife said we need diapers.

Siri: You probably mean the Size 3 Swaddlers for Nina. They can have them to you tomorrow for $20. Shall I order them?

Alex: Yes, and have them send a gift for my wife.

Siri: They recommend this bracelet to go along with the earrings you bought her last year for your anniversary.  Shall I add them to the order?

Alex: Yes, thanks.

Siri: Forever in your service, Alex.

Same-Day Delivery

The retail industry continues to evolve into one of on-demand fulfillment. The majority of the American populace will soon live close enough to a major distribution center (DC) capable of trucking an item to your front door the same day you order it. Amazon’s finally getting around to deploying the shiny robots it bought when it acquired Kiva, and it’s estimated that robot automation will increase the number of items a single warehouse picker can gather from 160 an hour to 600 an hour. If free 2-day shipping is the norm for 2012, will consumers expect free 2-hour shipping by 2014? (I know many CFOs who sure hope not.)

3D Printing

For less than $3,000 you can now buy a high-quality 3D printer capable of creating intricate products out of plastic. More materials, larger sizes, and decreasing prices are coming soon. Forget same-day delivery–you want that new case for your iWhatever? Order it (by voice) and it will spit out of your 3D printer. Marketplaces of interested designers have already started to grow and it’s just a matter of time before major consumer brands get into this space.

These are but 4 areas–Data, Voice, Delivery, and Printing–that are sure to play a major role in the future of ecommerce.  Admittedly, it’s awfully shortsighted to not consider the impact of global ecommerce growth on the next fundamental Internet shift.  China should overtake the US online retail market by 2013 or 2014.  And what’s to keep the Chinese manufacturers of most of the products we buy from building their own DCs all over the US and Europe and selling direct to consumers?  (Answer: nothing, they’re going to do it and cut out the American/European middleman some day.)

Whenever it happens, I endorse Zeisser’s model, and there’s no question that we’re sitting on the precipice of the next fundamental shift in our industry.


Redefining the Post-Mortem Meeting

Thinking back to my first time seeing the meeting subject title ‘Post Mortem for (insert project name that went horribly awry)’ pop up in my Inbox, I remember hitting ‘Accept’ somewhat reluctantly. My mind quickly concocted a visual of a mock funeral for said project: the people there didn’t really like the project, but they attended anyway… out of respect. Afterward they talked about a few good qualities, but mostly complained about it before going back to business as usual.

Yes, a little strange maybe, but that odd visual story in my head proved to be accurate for most Post-Mortem meetings attended in the years that followed. Different agencies, different projects, but they all usually played out in the same way. Typically, one of these meetings would be scheduled only after a project that was riddled with issues, blown budgets & missed deadlines. As for projects that went tremendously well? No need for a Post-Mortem, we’re awesome, go team!

Changing the Perception

Unfortunately, these after-the-fact meetings usually have a negative connotation attached to them. People attend with their backs up, ready to defend their role on the project, air grievances, and place blame elsewhere. Luckily, it doesn’t have to be this way. When it comes down to it, team members want the projects they take part in to be successful. Changing how a Post-Mortem is perceived is crucial to future success on projects with that specific client, and to your company’s process as a whole. Enact this change by focusing on the holistic view of how your company evolves its process over time, not just on what the team should have done in hindsight on that one project.

Below are the tenets that should always be top of mind for anyone planning to conduct a Post-Mortem successfully. If you stay true to these items, your team will start to view these meetings as a beneficial aspect of the project and you will see the improvements in future endeavors.

1) Keep the meeting structure simple

There are quite a few meeting outlines that exist out there, but they all really break down into five main components. At Ai, the following structure for Post-Mortem meetings has proven very successful.

• What has been working?
• What has not been working?
• What was painful but necessary?
• What did you learn about working with this particular client?
• Any recommendations that we should implement into future processes?

This breakdown requires the team to begin with positive aspects of the project, and end with forward-thinking process improvement ideas to help set an optimistic tone and shift the perception away from the negative. It’s tempting to gloss over everything but that pesky second bullet, but it is so important to make sure all aspects – good and bad – are discussed.

2) Ensure the attendees are prepared ahead of time

By nature, Project & Account Managers are organized. Keeping the client happy, the projects successful, and the team working efficiently is par for the course. This includes getting your Post-Mortem meeting outline in order. But these goals are not always the main focus of the team members executing the deliverables. They are focused on their daily tasks at hand, whether it involves getting a Strategy Recommendation out the door, or the third revision of creative done in time to hand off to Technology. Basically, people are busy and this could fall low on their list of things to get done.

To sidestep any probable delay in receiving feedback, send out a list of questions to the staff at least one week before the meeting. Put a reminder on their calendar, asking them to send responses by a specific date. This forces team members to really think about the answers. If you ask people to physically type out their feedback, you will find the content will be more pointed & specific. People will instinctively recognize in their bulleted list what is legitimate, and what is just whiny.

3) Time It!

The recommended time for a Post-Mortem is no longer than 1.5 hours. Sometimes this can be difficult, especially if there are too many missteps to count. The organizer can sidestep this by identifying overlapping problem areas received in the initial feedback and integrating them into one focus point. Each bullet point has a specified time allotted and, once you reach the maximum time for that item, assess whether it is necessary to schedule a follow-up meeting.

4) Introduce the Mini-Mortem

A few months ago, a PM was trying to see what she could do to correct a list of growing issues on a hectic project… then a light bulb went off. Why wait until after a project has come to an end to course-correct issues and highlight achievements? By placing a ‘Mini-Mortem’ at the halfway point of the project, the team as a whole was able to identify problem areas and pain points before the project was over. By providing a means to voice these concerns and call out things they feel are working well, it allows the Account Managers ample time to refocus efforts where needed. Again, it’s important not to ignore the positive aspects; this is a great time to leverage what has been working well and build upon it.

5) Apply Lessons Learned To the Next Project

When Post-Mortem meetings occur after a project, oftentimes whatever learnings are captured are quickly forgotten. The information shared between coworkers during these meetings is on some level remembered, and corrections of previous issues happen organically, but this isn’t enough. At some point people will roll off and new members will transition onto a piece of client business. If tangible steps aren’t taken to capture the valuable information shared during a Post-Mortem, the ever-important ‘Next Steps’ will never be implemented. When mistakes aren’t corrected, these meetings tend to be viewed as a time-suck. Why bother meeting if management isn’t going to fix it the next time around?

When a new piece of work gets underway, make sure there is time allotted to review the previous Post-Mortem notes along with the Next Steps from that meeting. Below is an example of one item that showed up on the whiteboard of a Post-Mortem, and its Next Step:

“Having multiple work-in-progress meetings scheduled with the client each week was great in that we got buy-in on our ideas throughout the process, but towards the end of the project we needed fewer meetings and more time to focus.”

Next Step:
PM to check in with the creative team each Monday, at this time we will assess what WIP’s are needed that week. We will also shift the 9:00am scheduled time to 5:30pm to allow creative to be ready.

A Happier Team

By implementing the steps above, you will begin to shift the overall attitude around how the Post-Mortem meeting is perceived by your coworkers. So start changing the perception, assign next steps and hold the team accountable. Next time a new piece of work rolls around, reserve a slot of time to refer back to the items that came up in the last Post-Mortem. Make sure to highlight the good and the bad; although correcting mistakes is crucial, touching upon what the team excelled at will boost morale and remind everyone that ‘it wasn’t all bad’.

And lastly…can we please change the name of this meeting?


Stop & Stor Goes Mobile!

Earlier this summer and on the tails of a full-site redesign, Ai Emerge launched a mobile site for Stop & Stor.

After the launch of the full-site redesign, mobile accounted for 25% of site traffic and was trending up. This was the perfect opportunity to mobilize!

The new mobile engagement included discovery, user experience, design, and development work. The discovery and user experience phase was crucial in providing a strong baseline to begin the project. Our strategy was to look at the quantitative data to help drive our recommendation, but also to get in the mindset of the end-user: busy people searching for storage in a big city. In doing so, we were able to identify the most useful features to include in the mobile site: simple and straightforward search options, easy ways to contact locations, and the ability to pay bills on the go.

In the design phase, we aimed to uphold the brand identity while effectively translating it to a mobile platform. We optimized graphics to target not just the iPhone but other popular mobile devices, like Android phones and tablets. We designed graphics to load quickly and degrade gracefully on older devices.

During the development phase, we leveraged and optimized the existing CMS so the business had a streamlined process to make site updates across both full-site and mobile platforms. This helped to minimize overhead for updating content. We also ensured that the interface behaved just as efficiently for the end-user. We implemented mobile best practices such as using advanced device detection so the site would load properly on any device, while including the option for the user to toggle their interactive experience from desktop to mobile.  In addition, we used mobile-specific fields, forms and interactions to enhance accessibility and usability across devices.

Since the site launched in July 2012, mobile engagement has already improved: bounce rate is down 21%, average visit duration is up 13%, and utilization of the online reservation system is up 93%. The increased usage of site functions and time spent on the site show that users are much more engaged than before.

We are so excited about this result and look forward to continuing our partnership with Stop & Stor.


Getting a “Head” on Selenium headless testing

A headless Selenium testing system is an ideal addition to any development workflow. Selenium reduces testing time, and integrates into CI tools such as Jenkins. The benefits are great, but only *if* you can get your tests to run. One of the most time-consuming problems that can arise from a headless system is a failed test. How can you debug when you can’t see the browser?

Ideally, you will test your scripts locally before running them on the headless system, but anyone who has even dabbled in the world of automation knows that a single procedure can yield very different results on different systems and in different environments. You can deal with the differences, but first you have to see them. After much searching, I could not find a straight answer as to how to export this display to my Windows machine from our Linux server. It is really quite simple, and can be done in a few steps.

In order to display a headless Selenium test on a Windows machine from a Linux server, you must first have an X window (X11) server running on your Windows machine. This is the underlying window display environment common to Unix systems. I used Xming, as it is the easiest to use and involves almost no setup.

You can download Xming from here.

When the installer finishes, run the Xming server. This doesn’t do anything that you can see, but starts the X11 environment on top of Windows.  Now you need PuTTY, the SSH client.

You can download PuTTY here.

Now that you have installed PuTTY, run the executable. In the left hand category listing for the settings, click on the X11 tab and enable X11 forwarding.

Type in the host name or IP of the remote machine and connect. Type “firefox” in the command line of your PuTTY shell (this assumes you have Firefox on the remote system). This should pop open the Firefox browser! Now that you can get the remote Firefox instance to show on your Windows machine, you can run Selenium through PuTTY and debug on the fly with the command:

java -jar (PATH-TO)/selenium-server.jar -trustAllSSLCertificates \
        -htmlSuite BROWSER URL  (PATH-TO)/SUITE  (PATH-TO)/LOG
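
If you drive the server from a script rather than typing the command into the PuTTY shell, the forwarded display can be wired up in the launcher itself. A small sketch, assuming Python is available on the server; the paths, display number, and function name are mine, not from any Selenium documentation:

```python
import os

def selenium_command(jar_path, browser, url, suite, log, display="localhost:10.0"):
    """Build the argv and environment for a Selenium server run whose
    browser windows appear on an X11-forwarded display (e.g. PuTTY + Xming)."""
    env = os.environ.copy()
    env["DISPLAY"] = display  # the display your SSH client is forwarding
    argv = ["java", "-jar", jar_path, "-trustAllSSLCertificates",
            "-htmlSuite", browser, url, suite, log]
    return argv, env
```

Both values can then be handed straight to subprocess.Popen(argv, env=env).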

Generating an InlineAdmin Form on the fly in Django

I’m adding drag/drop uploading to the Django admin for one of our open source projects, called Stager. A blog post about that will follow; it’s not screenshot-ready yet. While doing this I knew we needed a pretty seamless transition after the upload finished, and that we would have to refresh the inline. I didn’t want a full page refresh, so let’s ajax it in.

For these examples, just assume that we have a parent CompAdmin, which has a model of Comp and an inline called CompSlideInline. We store the instance of the Comp in comp.

from django.template import loader, Context
from django.contrib.admin import helpers
from django.db import transaction
from django.contrib import admin

comp = Comp.objects.get(id=comp_id)
#get the current site
admin_site =
compAdmin = CompAdmin(Comp, admin_site)

#get all possible inlines for the parent Admin
inline_instances = compAdmin.get_inline_instances(request)
prefixes = {}

for FormSet, inline in zip(compAdmin.get_formsets(request, comp), inline_instances):
    #get the inline of interest and generate its formset
    if isinstance(inline, CompSlideInline):
        prefix = FormSet.get_default_prefix()
        prefixes[prefix] = prefixes.get(prefix, 0) + 1
        if prefixes[prefix] != 1 or not prefix:
            prefix = "%s-%s" % (prefix, prefixes[prefix])
        formset = FormSet(instance=comp, prefix=prefix, queryset=inline.queryset(request))

#get possible fieldsets, readonly, and prepopulated information for the parent Admin
fieldsets = list(inline.get_fieldsets(request, comp))
readonly = list(inline.get_readonly_fields(request, comp))
prepopulated = dict(inline.get_prepopulated_fields(request, comp))

#generate the inline formset
inline_admin_formset = helpers.InlineAdminFormSet(inline, formset,
            fieldsets, prepopulated, readonly, model_admin=compAdmin)

#render the template
t = loader.get_template('admin/staging/edit_inline/_comp_slide_drag_upload_ajax.html')
c = Context({ 'inline_admin_formset': inline_admin_formset })
rendered = t.render(c)
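
One detail worth calling out is the prefix bookkeeping inside that loop: it reproduces the admin change view’s scheme for keeping formset prefixes unique. Pulled out on its own, the logic looks like this (a pure-Python sketch; the function name is mine):

```python
def unique_prefix(default_prefix, seen):
    """Return a formset prefix unique among the inlines rendered so far,
    numbering repeats the same way the Django admin change view does."""
    seen[default_prefix] = seen.get(default_prefix, 0) + 1
    if seen[default_prefix] != 1 or not default_prefix:
        return "%s-%s" % (default_prefix, seen[default_prefix])
    return default_prefix
```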

Lock your own processes in Magento

Inevitably, when you get into the weeds with a Magento build, you’ll need to run some big, hairy process — perhaps even on a regular basis. Perhaps it involves processing images, or updating data, or creating reports. Perhaps it runs on cron, but you’d also like to run it from the command line. And if it’s destructive — i.e., it alters data — you definitely want to make sure you’re only running one at a time.

Magento has a nice semaphore locking process handler built in to its indexers. In order for a reindexing process to run, the process needs to obtain a lock. The code lives in the Mage_Index_Model_Process class, and has helpful methods such as isLocked(), lockAndBlock(), and unlock(). These look for and manipulate files in your application’s var/locks directory.

The implementation is a tried and true locking mechanism that you could write yourself, but why bloat an already massive code base? We can just repurpose this relatively self-contained functionality whenever we need to lock a process.


Ai ranks among 50 Most Engaged Workplaces™

Achievers has announced this year’s 50 Most Engaged Workplaces™– the award aims to inspire enhancements to the workplace by championing the growth of employee-centric organizations. Ai is thrilled to be ranked among the top US organizations.

The judges read, re-read and compare survey questions related to the Eight Elements of Employee Engagement™: Leadership, Communication, Culture, Rewards & Recognition, Professional & Personal Growth, Accountability & Performance and Vision & Values.  Forms are filled out without any company names so submissions can be judged with as much anonymity as possible.

We are delighted to have made the cut!

To Achievers: we love what you stand for. We agree employees are a company’s greatest asset; when companies empower them to succeed and recognize performance, not presence, the employees and the business both reap the benefits. Thank you; we are honored to be among the 2012 50 Most Engaged Workplaces™.


Getting Started with Solr and Django

Solr is a very powerful search tool, and it is pretty easy to get the basics, such as full text search, facets, and related assets, up and running quickly. We will be using haystack to handle the communication between Django and Solr. All code for this can be viewed on GitHub.


Assuming you already have Django up and running, the first thing we need to do is install Solr.

curl -O
cd apache-solr-4.0.0-BETA
cd example
java -jar start.jar

Next, install pysolr and haystack. (At the time of this writing, the git checkout of haystack works better with the Solr 4.0 beta than the 1.2.7 that’s in pip.)

pip install pysolr
pip install -e

Add ‘haystack’ to INSTALLED_APPS in and add the following haystack connection:

HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.solr_backend.SolrEngine',
        'URL': ''  # point this at your Solr core
    },
}

Full Text Search

For the example, we’re going to create a simple job database that a recruiter might use. Here is the model:

from django.db import models
from django.contrib.localflavor.us import models as us_models

JOB_TYPES = (
    ('pt', 'Part Time'),
    ('ft', 'Full Time'),
    ('ct', 'Contract')
)

class Company(models.Model):
    name = models.CharField(max_length=64)
    address = models.TextField(blank=True, null=True)
    contact_email = models.EmailField()

    def __unicode__(self):

class Location(models.Model):
    city = models.CharField(max_length=64)
    state = us_models.USStateField()

    def __unicode__(self):
        return "%s, %s" % (, self.state)

class Job(models.Model):
    name = models.CharField(max_length=64)
    description = models.TextField()
    salary = models.CharField(max_length=64, blank=True, null=True)
    type = models.CharField(max_length=2, choices=JOB_TYPES)
    company = models.ForeignKey(Company, related_name='jobs')
    location = models.ForeignKey(Location, related_name='location_jobs')
    contact_email = models.EmailField(blank=True, null=True)
    added_at = models.DateTimeField(auto_now=True)

    def __unicode__(self):

    def get_contact_email(self):
        #fall back to the company contact when the job has no email
        if self.contact_email:
            return self.contact_email

The next step is to create the SearchIndex object that will be used to transpose the data to Solr. Save this as in the same folder as your The text field with its template will be used for full text search in Solr. The other two fields will be used for faceted (drill-down) navigation. For more details on this file, check out the haystack tutorial.

from haystack import indexes
from jobs.models import Job


class JobIndex(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, use_template=True)
    type = indexes.CharField(model_attr='type', faceted=True)
    location = indexes.CharField(model_attr='location', faceted=True)

    def get_model(self):
        return Job

    def index_queryset(self):
        return self.get_model().objects.all()

Create the search index template in your template folder with the following naming convention: search/indexes/[app]/[model]_text.txt
For us, this is templates/search/indexes/jobs/job_text.txt

{{ }}
{{ object.description }}
{{ object.salary }}
{{ object.type }}
{{ object.added_at }}

Now, let’s get our data into Solr. Run ./ build_solr_schema to generate a schema.xml file. Move this into example/solr/conf in your Solr install. Note: if using Solr 4, edit this file and replace stopwords_en.txt with lang/stopwords_en.txt in all locations. To test everything and load your data, run ./ rebuild_index. Subsequent updates can be made with ./ update_index.

If that all worked, we can start working on the front-end to see the data in Django. Add this to your

(r'^$', include('haystack.urls')),

At this point there are at least two templates we’ll need: one for the search results page, and a sub-template to represent each item we are pulling back. My example uses Twitter Bootstrap for some layout help and styling; see my base.html here if interested.

Create templates/search/search.html
This gives you a basic search form, the results, and pagination:

{% extends 'base.html' %}

{% block hero_text %}Search{% endblock %}
{% block header %}
Click around!

{% endblock %}

{% block content %}
<div class="span12">
    <form class="form-search" action="." method="get">
        {{ form.as_table }}
        <input type="submit" value="Search" />
    </form>
</div>
{% if query %}
<div class="span8">
    <div id="accordion2" class="accordion">
        {% for result in page.object_list %}
            {% include 'search/_result_object.html' %}
        {% empty %}
            <p>No results found.</p>
        {% endfor %}
    </div>
    {% if page.has_previous or page.has_next %}
    <div>
        {% if page.has_previous %}<a href="?q={{ query }}&amp;page={{ page.previous_page_number }}">{% endif %}&laquo; Previous{% if page.has_previous %}</a>{% endif %}
        {% if page.has_next %}<a href="?q={{ query }}&amp;page={{ page.next_page_number }}">{% endif %}Next &raquo;{% if page.has_next %}</a>{% endif %}
    </div>
    {% endif %}
</div>
{% else %}
{% endif %}
{% endblock %}
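The Previous/Next links above just carry the query and a page number in the query string; the equivalent URL building in plain Python looks like this (the helper name is mine):

```python
from urllib.parse import urlencode

def page_link(query, page_number):
    """Build the href used by the Previous/Next links: ?q=<query>&page=<n>."""
    return "?" + urlencode({"q": query, "page": page_number})

print(page_link("python developer", 2))  # ?q=python+developer&page=2
```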

And the sub-template, templates/search/_result_object.html

{% with obj=result.object %}
<div class="accordion-group">
    <div class="accordion-heading">
        <a class="accordion-toggle" href="#collapse_{{ obj.pk }}" data-toggle="collapse" data-parent="#accordion2">
            {{ obj.title }}
        </a>
        <div style="padding: 8px 15px;">
            <p>Company: {{ obj.company }}</p>
            <p>Type: {{ obj.type }}</p>
            {% if obj.salary %}<p>Salary: {{ obj.salary }}</p>{% endif %}
            <p>Location: {{ obj.location }}</p>
        </div>
    </div>
    <div id="collapse_{{ obj.pk }}" class="accordion-body collapse in">
        <div class="accordion-inner">
            <p>Contact: <a href="mailto:{{ obj.get_contact_email }}">{{ obj.get_contact_email }}</a></p>
            {{ obj.description }}
        </div>
    </div>
</div>
{% endwith %}

Start up your dev server for search!

Related Items

Adding Related Items is as simple as using the more_like_this tag from the haystack template tag library and tweaking our Solr config. Open up solrconfig.xml and add a MoreLikeThisHandler within the <config> tag:

<requestHandler name="/mlt" class="solr.MoreLikeThisHandler" />

Our full _result_object.html now looks like this:

{% load more_like_this %}

{% with obj=result.object %}
<div class="accordion-group">
    <div class="accordion-heading">
        <a class="accordion-toggle" data-toggle="collapse" data-parent="#accordion2" href="#collapse_{{ obj.pk }}">
            {{ obj.title }}
        </a>
        <div style="padding: 8px 15px;">
            <p>Company: {{ obj.company }}</p>
            <p>Type: {{ obj.type }}</p>
            {% if obj.salary %}<p>Salary: {{ obj.salary }}</p>{% endif %}
            <p>Location: {{ obj.location }}</p>
        </div>
    </div>
    <div id="collapse_{{ obj.pk }}" class="accordion-body collapse in">
        <div class="accordion-inner">
            <p>Contact: <a href="mailto:{{ obj.get_contact_email }}">{{ obj.get_contact_email }}</a></p>
            {{ obj.description }}
            {% more_like_this obj as related_content limit 5 %}
            {% if related_content %}
                <ul>
                    {% for related in related_content %}
                        <li><a>{{ related.object.title }}</a></li>
                    {% endfor %}
                </ul>
            {% endif %}
        </div>
    </div>
</div>
{% endwith %}


To get our type and location facets, we’ll have to add them to a SearchQuerySet and pass that to a FacetedSearchView instead of the default view. Our urls.py now looks like this:

from django.conf.urls import patterns, include, url
from haystack.forms import FacetedSearchForm
from haystack.query import SearchQuerySet
from haystack.views import FacetedSearchView

sqs = SearchQuerySet().facet('type').facet('location')

urlpatterns = patterns('haystack.views',
    url(r'^$', FacetedSearchView(form_class=FacetedSearchForm, searchqueryset=sqs), name='haystack_search'),
)

Then, we can use the generated facets in the search template via the facets variable.
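Each field in facets.fields comes back as a list of (value, count) pairs, most frequent first, which is why the template reads type.0 and type.1. A plain-Python sketch of that shape (the sample data is made up):

```python
from collections import Counter

# Hypothetical values indexed in the 'type' field
types = ["Full Time", "Full Time", "Contract", "Part Time", "Contract", "Full Time"]

# The facet data is shaped much like Counter.most_common():
facet_counts = Counter(types).most_common()
print(facet_counts)  # [('Full Time', 3), ('Contract', 2), ('Part Time', 1)]

value, count = facet_counts[0]  # what the template sees as type.0 and type.1
```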

{% extends 'base.html' %}

{% block hero_text %}Search{% endblock %}
{% block header %}<p>Click around!</p>{% endblock %}

{% block content %}
<div class="span12">
    <form method="get" action="." class="form-search">
        {{ form.as_table }}
        <input type="submit" value="Search">
    </form>
    {% if query %}
        <div class="span2">
            {% if facets.fields.type %}
                <ul>
                    {% for type in facets.fields.type %}
                        <li><a href="{{ request.get_full_path }}&amp;selected_facets=type_exact:{{ type.0|urlencode }}">{{ type.0 }}</a> ({{ type.1 }})</li>
                    {% endfor %}
                </ul>
            {% endif %}
            {% if facets.fields.location %}
                <ul>
                    {% for location in facets.fields.location %}
                        <li><a href="{{ request.get_full_path }}&amp;selected_facets=location_exact:{{ location.0|urlencode }}">{{ location.0 }}</a> ({{ location.1 }})</li>
                    {% endfor %}
                </ul>
            {% endif %}
        </div>
        <div class="span6">
            <div class="accordion" id="accordion2">
                {% for result in page.object_list %}
                    {% include 'search/_result_object.html' %}
                {% empty %}
                    <p>No results found.</p>
                {% endfor %}
            </div>
            {% if page.has_previous or page.has_next %}
                <div>
                    {% if page.has_previous %}<a href="?q={{ query }}&amp;page={{ page.previous_page_number }}">{% endif %}&laquo; Previous{% if page.has_previous %}</a>{% endif %}
                    {% if page.has_next %}<a href="?q={{ query }}&amp;page={{ page.next_page_number }}">{% endif %}Next &raquo;{% if page.has_next %}</a>{% endif %}
                </div>
            {% endif %}
        </div>
    {% else %}
        <div class="span6">
            {# Show some example queries to run, maybe query syntax, something else? #}
        </div>
    {% endif %}
</div>
{% endblock %}

And we’re done! As I said, check out the haystack documentation for more information. Leave any questions in the comments and I’ll be sure to answer them. Spelling suggestions to come in the next post.