The key to a successful SharePoint search strategy is information architecture. The reality is that once a SharePoint implementation is established, it’s difficult to go back and retrofit a meaningful taxonomy and an effective information architecture. But it’s not impossible. So here are my recommendations for how you can approach this daunting task.

Content quality

The quality of the content is of primary importance. You do have to configure the out-of-the-box search experience, but in the end it all comes down to the quality of the content.

Start small

Companies have a huge amount of content. If you try to change everything at once, you’ll likely fail because of the sheer scale of changing the metadata on millions of documents. On top of that, going back to figure out the proper taxonomy and tagging is another huge task. So I suggest you start small.

For example, choose one department or one project. Just choose something small enough that you can accomplish it, but big enough to showcase afterwards. This way, you can create the proper taxonomy, tagging, and information architecture for this small set of content. Afterwards, you can create a specific search experience for that small set.

As soon as you have the proper metadata, tagging, and information architecture, your search will be good and you can showcase this pilot project. You can demonstrate it to other departments or project groups—or to the rest of your company. After that you can go step by step and eventually improve everything.

Search flow

Determining search flow and how you present it to users is complex. It’s like playing with Lego: you have many considerations to take into account when planning the user interface (UI), so it needs to be very well planned.

Let’s say you want to plan out a search experience that provides information about your products. You define how you want to present those products, what kind of information you want to provide, what kind of filtering you want, which systems you want to include, and so on. You still need to think about UI elements, such as which Refiners you need, in which order they should appear, and how you want to present them. Once you’re done with that, you have to polish the user experience and connect the UI to the content in the index. This is a challenge because you have a lot of modules and tools to use, such as Result Sources, Query Rules, and Result Blocks.
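The planning above eventually turns into concrete queries. The following is a minimal sketch of a helper that builds a SharePoint 2013 Search REST query URL. The `/_api/search/query` endpoint and its `querytext`, `refiners`, and `selectproperties` parameters are part of the SharePoint 2013 Search REST API, but the site URL, the KQL text, and the property names below are hypothetical examples, not values from this article.

```python
from urllib.parse import quote

def build_search_url(site_url, query_text, refiners=None, select_properties=None):
    """Build a SharePoint 2013 Search REST query URL.

    The endpoint and parameter names are from the SharePoint 2013
    Search REST API; the values passed in are up to the caller.
    """
    params = [f"querytext='{quote(query_text)}'"]
    if refiners:
        # The order of refiners here mirrors the order planned for the UI.
        params.append(f"refiners='{quote(','.join(refiners))}'")
    if select_properties:
        params.append(f"selectproperties='{quote(','.join(select_properties))}'")
    return f"{site_url}/_api/search/query?" + "&".join(params)

# Hypothetical product-search page: restrict by content type, refine by
# file type and a refinable managed property, return title and path.
url = build_search_url(
    "https://intranet.example.com",
    "ContentType:Product",
    refiners=["FileType", "RefinableString00"],
    select_properties=["Title", "Path"],
)
print(url)
```

In a real page you would rarely build this URL by hand (the Search web parts do it for you), but seeing the query spelled out makes it easier to reason about which Refiners and properties your UI actually needs.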

User behavior

You have to analyze the behavior of your users and learn from their patterns. Look at what searches they run and which searches produce zero results. For example, you might have users searching for “Microsoft CRM” a lot but not finding anything. You’ll realize that you need to include the content you have on Microsoft CRM, because this topic is clearly interesting to your users. This is a search administration issue.
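The zero-results analysis above can be sketched in a few lines. This is a toy example: in a real deployment the data would come from SharePoint’s search usage reports, and the log entries below are invented for illustration.

```python
from collections import Counter

# Hypothetical export of search usage data: (query text, result count).
search_log = [
    ("microsoft crm", 0),
    ("expense report", 12),
    ("microsoft crm", 0),
    ("vacation policy", 4),
    ("crm integration", 0),
]

def top_zero_result_queries(log, n=5):
    """Return the most frequent queries that produced no results."""
    counts = Counter(query for query, hits in log if hits == 0)
    return counts.most_common(n)

print(top_zero_result_queries(search_log))
# → [('microsoft crm', 2), ('crm integration', 1)]
```

A report like this tells the search administrator which topics users are looking for that the index doesn’t cover yet.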

Then you can teach people how to use the search solution. Users need to learn what to do if they can’t find the content they want. They have to know who to contact if they have questions. And the administrator needs to know what problems users might run into and give them the tools they need if they can’t find something.

If users have trouble finding specific content, they might assume it’s not there and recreate or duplicate it. Many factors can make content hard for a user to find: I might not have permission to access the content I want, or the content I’m looking for might be in a different system or site collection that is not in the search results yet. In such cases, I won’t find what I’m looking for. So you have to train people on these scenarios and teach them how to use search.


Continuous crawl

Continuous Crawl is a new capability in SharePoint 2013 that is important to consider. In my opinion, it provides the greatest benefit in big organizations, because they have large amounts of content. Let me give an example of the benefits of continuous crawl in contrast with a full crawl.

Imagine an environment in which a full crawl takes something like 3 weeks. That might sound terrible—one crawl takes 3 weeks! But it actually happens a lot if the content source is huge, containing millions or tens of millions of documents.

In such an environment, your index is out of date immediately after a full crawl if you don’t have continuous crawl. Let’s say you start the full crawl today and estimate it will finish in 3 weeks. During the full crawl, users may modify some content, but because you’re doing a full crawl, those changes won’t be picked up in the index. So when the full crawl finishes in 3 weeks, the index will still contain content that is almost 3 weeks old, even though users have modified it since then.

With continuous crawl, you can work around this situation. Continuous crawl can run in parallel with the full crawl. So let’s say you start the full crawl today and it ends in 3 weeks. If a document crawled early in the process is modified during those 3 weeks, the continuous crawl will update it in the index. At the end of the 3-week full crawl, you will have an up-to-date index immediately. I think this is the biggest benefit of continuous crawl. But it’s also important to mention that continuous crawl works on SharePoint content sources only, not on file shares, web sites, or any other types of content sources, due to the way it is implemented behind the scenes.
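The freshness difference described above can be made concrete with a toy model. The 21-day crawl duration and the day numbers are illustrative, and the model deliberately simplifies (it assumes continuous crawl picks up a change almost immediately, and ignores crawl scheduling details).

```python
# Toy model of index freshness during a 21-day full crawl,
# with and without a parallel continuous crawl.
FULL_CRAWL_DAYS = 21

def index_age_at_finish(modified_on_day, continuous_crawl):
    """Days a document's indexed copy is out of date when the full
    crawl completes, assuming the full crawl captured it on day 0
    and the user edited it on `modified_on_day`."""
    if continuous_crawl:
        return 0  # the change is picked up shortly after modification
    return FULL_CRAWL_DAYS - modified_on_day

# A document crawled on day 0 and edited on day 2:
print(index_age_at_finish(2, continuous_crawl=False))  # → 19 (days stale)
print(index_age_at_finish(2, continuous_crawl=True))   # → 0
```

Without continuous crawl, the earlier in the full crawl a document was captured, the staler its indexed copy is when the crawl finishes.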

The second point I’d like to make about continuous crawl concerns using it with the Content Search web part. Content Search web parts are driven by search, so the content has to be in the index before you can present it to users. Continuous crawl is a good way to get content into the index quickly: it keeps your index almost up to date, so your content freshness will be very good. Therefore, if you use Content Search web parts to aggregate content (e.g., tasks, latest documents, or latest proposals), you will always have fresh content in them, and you don’t have to wait hours or days for updates.

Communicating with business users

One thing you have to be aware of is that the role of the business user has changed with SharePoint 2013. This means that you have to adapt your expectations and your approach to take this new role into account.

Communication has always been important, but with SharePoint 2013, search administrators have to communicate more with business users, and in a different way than they did in the past. In previous versions of SharePoint, a lot of search functionality was available only to administrators. If some customization was needed, business users could not do it themselves. Business users and administrators had to discuss the needs and requirements: what the business users needed, and what the search administrators were able to implement and how.

With SharePoint 2013, the communication needs have changed because power users can do a lot more. They can do many things without the search administrators. Business users still need help with some things, but they no longer need to ask for a custom solution or a custom web part. What they need help with now are things like Managed Properties.

Sometimes you really need search administrators for building a whole solution. But in more and more cases today, the administrators just need to provide the proper Lego pieces for the business users.

3 keys to successful search

If I had to recommend 3 things to ensure a successful SharePoint 2013 search implementation, here’s what I’d suggest.

First, I recommend doing an inventory. How much content do you have? What kind of content? Where? Who is the owner?
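An inventory like this can start as a simple aggregation over whatever records you can gather from site owners or crawl reports. Everything in this sketch is hypothetical: the sites, content types, owners, and counts are invented for illustration.

```python
from collections import defaultdict

# Hypothetical inventory records: (location, content type, owner, doc count).
inventory = [
    ("HR site", "policy", "HR team", 1200),
    ("Sales site", "proposal", "Sales team", 8400),
    ("HR site", "form", "HR team", 300),
]

def summarize_by_owner(records):
    """Aggregate document counts by owner, to see who holds the most
    content and where a pilot project might start."""
    totals = defaultdict(int)
    for _location, _content_type, owner, count in records:
        totals[owner] += count
    return dict(totals)

print(summarize_by_owner(inventory))  # → {'HR team': 1500, 'Sales team': 8400}
```

Even a rough table like this answers the four questions above: how much content, what kind, where, and who owns it.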

The second thing is the next step after your inventory: you need to plan your search metadata, using your content inventory as the basis. This means, of course, that you have to inventory the metadata you already have, collect the requirements, needs, and issues of your users, and map them together. Technically, those are the crawled properties and the managed properties. But in practice it takes a lot of planning, a lot of psychology, and a lot of research. As I said earlier, the quality of the content and the quality of the metadata are the primary keys to quality search.
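A metadata plan can be written down as a simple crawled-to-managed property mapping before anyone touches the Search Service Application, where the mapping is actually configured in SharePoint 2013. The property names below are hypothetical; only the crawled/managed property concept comes from the text.

```python
# Sketch of a crawled-to-managed property mapping plan.
# managed property name -> (mapped crawled properties, searchable, refinable)
mapping_plan = {
    "ProductName": (["ows_ProductName"], True, False),
    "ProductCategory": (["ows_Category", "ows_ProductType"], True, True),
}

def refinable_properties(plan):
    """List the managed properties planned for use as Refiners."""
    return [name for name, (_sources, _searchable, refinable) in plan.items()
            if refinable]

print(refinable_properties(mapping_plan))  # → ['ProductCategory']
```

Writing the plan down in this form makes the mapping reviewable with business users before it is configured in the search schema.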

The third thing, of course, is the user experience—how you configure your search pages, Refiners, your display templates, that kind of thing.