All posts in troubleshooting

SharePoint 2013 Crawl Log Error: Index was out of range. Must be non-negative and less than the size of the collection

A customer of mine has installed SharePoint Server 2013 integrated with Project Server 2013. Moreover, they use lots of Search-driven web parts (mini Search Based Apps) to aggregate content across their complex projects.

When using one of these apps, they noticed that some of the tasks were missing: some new tasks as well as some old ones, despite having Continuous Crawl turned on. Read more…

Time Machines vs. Incremental Crawl

Recently I’ve been working with a customer where my job was to make their SQL-based content management system searchable in SharePoint. Nice challenge. One of the best parts was what I call the “time machine“.

Imagine a nice, big environment where a full crawl takes more than two weeks. There are several points during such a project where we need a full crawl, for example when working with managed properties, etc. But if a full crawl takes that long, it’s always a pain. You know, the kind where you can even go on holiday while it’s running 😉

We were getting close to the end of the project, incrementals were scheduled, etc., but it turned out there were some items that had been put into the database recently, with an older “last modified date”. How can this happen? With some app, for example, or if the users can work offline and upload their docs later (depending on the source system’s capabilities, these docs sometimes get the original time stamp and sometimes the current upload time as their “last modified date”). If items arrive with linear “last modified dates”, incremental crawls are easy to do, but imagine this sequence:

  1. Full crawl, everything in the database gets crawled.
  2. Item1 has been added, last_modified_date = ‘2013-08-09 12:45:27’
  3. Item2 has been modified, last_modified_date = ‘2013-08-09 12:45:53’
  4. Incremental crawl at ‘2013-08-09 12:50:00’. Result: Item1 and Item2 crawled.
  5. Item 3 has been added, last_modified_date = ‘2013-08-09 12:58:02’
  6. Incremental crawl at ‘2013-08-09 13:00:00’. Result: Item3 crawled.
  7. Item4 has been added by an external tool, last_modified_date = ‘2013-08-09 12:45:00’.
    Note that this time stamp is earlier than the previous crawl’s time.
  8. Incremental crawl at ‘2013-08-09 13:10:00’. Result: nothing gets crawled.

The reason is: Item4’s last_modified_date time stamp is older than the previous crawl’s, and the crawler assumes every change happened after that (i.e. there’s no time machine built into the backend 😉 ).

What to do now?

First option is: Full crawl. But:

  1. If a full crawl takes more than two weeks, it’s not always an option. We have to avoid it if possible.
  2. We can assume the very same thing can happen anytime in the future, i.e. docs appearing from the past, even from before the last crawl time. And a full crawl is not an option, see #1.

Obviously, the customer would like to see these “time travelling” items in the search results as well, but it looks like neither a full nor an incremental crawl is an option.

But consider this idea: what if we could trick the incremental crawl into thinking the previous crawl was not 10 minutes ago but a month ago (or two, or a year, depending on how old the docs newly appearing in the database can be)? In this case, the incremental crawl would not check for new/modified items since the last incremental, but for a month (or two, or a year, etc.) back. Time machine, you know… 😉

Guess what? It’s possible. The solution is not official and not supported, but it works. The “only” thing you have to do is modify the proper time stamps in the MSSCrawlURL table, along the lines of the sketch below:
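For illustration only, here is a rough PowerShell sketch of the idea (again: completely unsupported). The column name LastTouchStart and the database name are assumptions for the example, so verify them against your own crawl store schema before touching anything, and scope the update properly:

$sql = @"
UPDATE dbo.MSSCrawlURL
SET LastTouchStart = DATEADD(month, -1, LastTouchStart)  -- hypothetical column name; add a WHERE clause to limit the update to the affected content source
"@

Invoke-Sqlcmd -ServerInstance "MySqlServer" -Database "Search_Service_Application_CrawlStoreDB" -Query $sql

With the time stamps pushed back by a month, the next incremental crawl looks a whole month further into the past.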

Why? Because the crawler determines the “last crawl time” from this table. If you push the time stamps back, the crawler thinks the previous crawl was long ago and goes back in time to pick up the changes from that longer period. This way, without doing a full crawl, you’ll get every item indexed, even the “time travelling” ones from the past.

PS: The same can be done if you have last_modified_date values in the future. The best docs from the future I’ve seen so far were created in 2127…

The problem in this case is that as soon as you crawl any of these, the crawler considers 2127 as the last crawl’s year, and nothing created earlier (i.e. in the present) will get crawled by any upcoming incremental. Until 2127, of course 😉



Reduce Resources Used by noderunner.exe in SharePoint 2013

My Search Troubleshooting session is one of the most popular ones, and definitely one of my favorites. I’ve been working on its use cases from conference to conference, building on my experiences as well as on the questions I’ve got from attendees.

One of the most common questions is definitely about the resources used by Search. Using Continuous Crawl and/or many crawl processes and/or frequent crawl schedules and/or big change sets in the content sources, etc. – all of these make the resources consumed by Search higher and higher. Besides the best-known optimization and scale-out techniques, I get this question very often: how can the resources consumed by noderunner.exe be limited?

The first part of my answer is: Please, do plan your resources in production! Search is often seen as “something” that works “behind the scenes”, but it can cause some very bad surprises… I’m sure you don’t want to see your production farm consuming 99% of the available memory while crawling… So please, plan first. Here’s some help for this: Scale search for performance and availability in SharePoint Server 2013.

But you might have some dev or demo environments, for sure, where you cannot have more than one crawler, running on the App server. It’s not recommended in production but absolutely reasonable on a dev farm. And I’m sure you don’t want to go for a coffee or lunch every time you run a crawl either… So here are some NOT SUPPORTED and NOT RECOMMENDED tips, for your dev/demo environment ONLY!! Don’t do any of them in production, please!!!

  1. The first one is the easier, “not-that-bad” configuration: simply set the Performance Level of Search to Reduced (or PartiallyReduced, if you don’t want to be so rigorous): Set-SPEnterpriseSearchService -PerformanceLevel Reduced
  2. Second one is the one that is strictly not supported and not recommended. Please don’t tell Microsoft you’ve heard this from me 😉 And once again: it’s for dev/demo environments ONLY!!!

    As you know, several search services run as noderunner.exe on the server(s). They can consume a LOT of memory, but the good news is: you can limit their memory usage! The magic: go to the folder C:\Program Files\Microsoft Office Servers\15.0\Search\Runtime\1.0, open noderunner.exe.config, and look for a line like this:

    <nodeRunnerSettings memoryLimitMegabytes="0" />

    The zero means “unlimited” here. The only thing to do is to set it to the amount of RAM (in megabytes) you’d like to allow for each noderunner.exe process; see the sketch right after this list.
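Here is a minimal PowerShell sketch of this change, assuming the default install path and a 250 MB limit per process (once more: dev/demo only, not supported):

$configPath = "C:\Program Files\Microsoft Office Servers\15.0\Search\Runtime\1.0\noderunner.exe.config"

# Load the config, cap each noderunner.exe at 250 MB, and save it back
[xml]$config = Get-Content $configPath
$node = $config.SelectSingleNode("//nodeRunnerSettings")
$node.SetAttribute("memoryLimitMegabytes", "250")
$config.Save($configPath)

# The new limit is picked up after restarting the Search Host Controller
Restart-Service SPSearchHostController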

NOTE: Be aware, these settings might make your crawling processes slower!!



More about Search DBs – Inconsistency due to an Error while Creating the SSA

After my blog post a couple of days ago (Deploying Search Service App by PowerShell – but what about the Search DBs?), I got an interesting question: what to do with a Search Service App that cannot be removed but cannot be used either, as no DBs have been created for it?

How come?

The reason was pretty simple: some DB issue occurred while creating the Search Service Application, and it ended up inconsistent: the SSA seemed to have been created in Central Administration, but something was off, as the SSA Proxy wasn’t displayed at all:

When I checked the DBs, they had not been created either.

All right, after the SQL issues had been fixed, let’s delete the SSA and create it again. But when you try to delete it, here is the error you get, even if you don’t check the “Delete data associated with the Service Applications” option:

Yeah, the Search Service Application cannot be deleted if its DB does not exist. And PowerShell doesn’t help in this situation either, as it gives you the very same error.

After several rounds of trials, here is how I could solve this problem: create an SSA with a different name but with the *very same* DB name. (Use PowerShell for this, see my earlier post, or the sketch below.) After this step, you’ll be able to delete your previous SSA, but don’t forget NOT to use the option “Delete data associated with the Service Applications”, as you’ll need the DBs in order to be able to delete your second, temporary SSA 😉
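Here is a minimal sketch of creating the temporary SSA (the application pool name and the database name are placeholders; use your own pool and the broken SSA’s exact DB name):

$appPool = Get-SPServiceApplicationPool "SearchServiceAppPool"

# Reuse the *very same* DB name as the broken SSA, so its databases get created again
New-SPEnterpriseSearchServiceApplication -Name "Temp Search Service Application" -ApplicationPool $appPool -DatabaseName "Search_Service_Application_DB"

Once the databases exist again, delete the original SSA without the “Delete data…” option, and finally remove the temporary SSA together with its databases.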

PS: Since this experience, I’ve been playing with this on my VM, trying several use cases. It turned out the same issue might happen if there’s a SQL issue while deleting the Search SSA.



No Document Preview Displayed on the Hover Panel (SharePoint 2013 Search)

Recently, I came across a SharePoint 2013 environment where everything was installed and configured, including Office Web Apps (WAC), but we still couldn’t get the document previews displayed on the Hover Panel in Search results. Result pages looked something like this, for each file type:

No Preview on Hover Panel

IF everything is configured correctly, the solution is pretty easy. Just click on the search result to open the document in WAC.

IF it opens successfully, go back, refresh, and the previews on the Hover Panel should be there. It looks like WAC requires some kind of “warm-up”:

 

How to Use Developer Dashboard in SharePoint 2013 Search Debugging and Troubleshooting

First of all, Happy New Year in 2013! This time I’m sitting in my home office, seeing snow everywhere out the window:

When I’m not outside to enjoy the fun and beauty of winter, I’m mostly getting prepared for my upcoming workshops and conference sessions, like for the European SharePoint Conference in a few weeks where I have a full day Search Workshop as well as three breakout sessions.

During my SharePoint 2013 Search sessions, one of my favorite topics and demos is the new Developer Dashboard. Let me show you a use case where I can demonstrate its power while debugging and troubleshooting Search.

Let’s say your end users start complaining that the Search Center is down. When you ask them what they mean by this, their response is something like “Something went wrong…” You know, they can see this on the UI:

You have two options here. You can start guessing, OR you can turn on the Developer Dashboard and do real, “scientific” debugging. Moving forward this way, turn on the Developer Dashboard by using PowerShell:

$content = ([Microsoft.SharePoint.Administration.SPWebService]::ContentService)
$appsetting = $content.DeveloperDashboardSettings
$appsetting.DisplayLevel = [Microsoft.SharePoint.Administration.SPDeveloperDashboardLevel]::On
$appsetting.Update()

Going to your SharePoint site, you’ll notice a new, small icon in the top right corner (assuming the default master page), which you can open the Developer Dashboard with:

However, going to your Search Center, you cannot see this icon at all:

This can be confusing, of course, but if you open the Developer Dashboard either before you navigate to the Search Center OR from the Site Settings of the Search Center, you’ll get it working and can enjoy its benefits.

When the Developer Dashboard opens, you’ll notice that it’s a new browser window in SharePoint 2013, with several tabs for the various kinds of information it displays:

  • Server Info
  • Scopes
  • SQL
  • SPRequests
  • Asserts
  • Service Calls
  • ULS
  • Cache Calls

After opening and navigating (back) to the Search Center, simply run a query, then refresh the Developer Dashboard. The result you’ll get is something like this:

Well, it’s a LOT of information you get here, isn’t it? In this example, simply go to the ULS tab (of course, you can browse the others and look around at what you get there too) and search (Ctrl + F) either for “error” or, more specifically, for the Correlation ID seen on the Search Center. Either way, you’ll find an error message like this:

Error occured: System.ServiceModel.EndpointNotFoundException: Could not connect to net.tcp://trickyaggie/39B739/QueryProcessingComponent1/ImsQueryInternal. The connection attempt lasted for a time span of 00:00:02.1383645. TCP error code 10061: No connection could be made because the target machine actively refused it.

Woot! QueryProcessingComponent1 throws an EndpointNotFoundException. Let’s go to the server hosting QueryProcessingComponent1 and check its status (is the server running?), its network connection, as well as the services running on it. There’s a service called “SharePoint Search Host Controller” that should be started – if it’s stopped, simply start it and check the Search Center again, for example with the quick check below.
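A quick way to do that check from PowerShell on the server hosting the query processing component:

# Is the Search Host Controller service running?
Get-Service SPSearchHostController

# If it's stopped, start it and re-test the Search Center
Start-Service SPSearchHostController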

Of course, this is only a small example of using the Developer Dashboard, but I’m confident that the more you play with it, the more you’ll value its capabilities and power.

I would really like to learn more about your experiences with the Developer Dashboard. Please share them in a comment here or by emailing me. Thanks!

 

Four Tips for Index Cleaning

If you’ve ever had fun with SharePoint Search, most likely you’ve seen (or even used) Index Reset there. This is very useful if you want to clear everything from your SharePoint index – but sometimes it’s not good enough:

  1. If you don’t want to clean the full index but one Content Source only.
  2. If you have FAST Search for SharePoint 2010.
  3. Both 🙂

1. Cleaning up one Content Source only

Sometimes you have a lot of content crawled but need to clear only one Content Source. In this case, clearing everything might be very painful – imagine clearing millions of documents, then re-crawling everything that should not have been cleared…

Instead, why not clean one Content Source only?

It’s much easier than it seems to be:

  1. Open your existing Content Source.
  2. Check that there’s no crawl running on this Content Source. The status of the Content Source has to be Idle. If not, stop the current crawl and wait until it’s done.
  3. Remove all Start Addresses from your Content Source (don’t forget to note them before clearing!).
  4. Wait until the index gets cleaned up. (*)
  5. Add back the Start Addresses (URLs) to your Content Source, and save your settings.
  6. Enjoy!

With this, you’ll be able to clear only one Content Source.

Of course, you can use either the SSA UI in Central Administration or PowerShell; the logic is the same. Here is a simple PowerShell script for removing the Start Addresses:

$contentSSA = "FAST Content SSA"
$sourceName = "MyContentSource"

$source = Get-SPEnterpriseSearchCrawlContentSource -Identity $sourceName -SearchApplication $contentSSA
$URLs = $source.StartAddresses | ForEach-Object { $_.OriginalString }

$source.StartAddresses.Clear()

Then, as soon as you’re sure the index has been cleaned up (*), you can add back the Start Addresses with this command:

 

ForEach ($address in $URLs) { $source.StartAddresses.Add($address) }

2. Index Reset in FAST Search for SharePoint

You most likely know Index Reset on the Search Service Application UI:

Index Reset on the Search Service Application

Well, in case you’re using FAST Search for SharePoint 2010 (FS4SP), it’s not enough. The steps for making a real Index Reset are the following:

  1. Make an Index Reset on the SSA, see the screenshot above.
  2. Open FS4SP PowerShell Management on the FAST Server, as a FAST Admin.
  3. Run the following command: Clear-FASTSearchContentCollection -Name <yourContentCollection>. The full list of available parameters can be found here. This deletes all items from the content collection, without removing the collection itself.

3. Cleaning up one Content Source only in FAST Search for SharePoint

The steps are the same as in the case of SharePoint Search, see above.

4. Checking the status of your Index

In Step #4 above (*), I mentioned that you should wait until the index gets cleaned up, and that always takes time.

The first place you can go is the SSA, where there is a number that is a very good indicator:

Searchable Items

In case of FS4SP, you should use PowerShell again, after running the Clear-FASTSearchContentCollection command:

  1. Open FS4SP PowerShell Management on the FAST Server, as a FAST Admin.
  2. Run the following command: Get-FASTSearchContentCollection -Name <yourContentCollection>. The result contains several pieces of information, including DocumentCount:

How to check the clean-up process with this?

First option: if you know how many items should be cleaned, just check the DocumentCount before you clean the Content Source, and regularly afterwards. If the value of DocumentCount is around the value you’re expecting AND not decreasing anymore, you’re done.

Second option: if you don’t know how many items will be cleared, just check the value of DocumentCount regularly, like every five minutes. If this value has stopped decreasing AND stays the same for a while (e.g. for three checks in a row), you’re done. A small script for this kind of polling is shown below.
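Here is a minimal sketch of such a check, polling DocumentCount every five minutes (the collection name "sp" is just an example):

do {
    $collection = Get-FASTSearchContentCollection -Name "sp"
    Write-Host (Get-Date).ToString() "DocumentCount:" $collection.DocumentCount
    Start-Sleep -Seconds 300
} while ($true)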

As soon as you’re done, you can add back the Start Addresses to your Content Source, as mentioned above.

 

Debugging and Troubleshooting the Search UI

Recently, I have been giving several Search presentations, and some of them were focusing on Crawled and Managed Properties. In this post, I’m focusing on the User Experience part of this story, especially on the Debugging and Troubleshooting.

As you know, our content might have one (or more) unstructured part(s) and some structured metadata, properties. When we crawl the content, we extract these properties – these are the crawled properties. And based on the crawled properties, we can create managed properties.

Managed Properties in a Nutshell

Managed Properties are controlled and managed by the Search Admins. You can create them mapped to one or more Crawled Properties.

For example, let’s say your company has different content coming from different source systems: Office documents, emails, database entries, etc. stored in SharePoint, the file system, Exchange or Lotus Notes mailboxes, Documentum repositories, and so on. For each piece of content, there’s someone who created it, right? But the name of this property might be different in the various systems and/or for the various document types. For Office documents, it might be Author, Created By, Owner, etc. For emails, it’s usually called From.

At this point, we have several different Crawled Properties used for the same thing: tagging the creator of the content. Why not display this in a common way for you, the End User? For example, we can create a Managed Property called ContentAuthor and map each of the Crawled Properties above to it (Author, Created By, Owner, From, etc.). With this, we’ll be able to use this property in a common way on the UI: display it on the Core Results Web Part, use it as a Refiner, or as a Sorting value in the case of FAST.

(NOTE: Of course, you can map each Crawled Property to more than one Managed Property.)
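As a quick illustration, here is a minimal sketch of such a mapping, using the SharePoint Search PowerShell cmdlets (FS4SP has its own equivalent cmdlets; the SSA name and the crawled property name ows_Author are examples only):

$ssa = Get-SPEnterpriseSearchServiceApplication "Search Service Application"

# Create the ContentAuthor managed property (type 1 = Text)
$mp = New-SPEnterpriseSearchMetadataManagedProperty -SearchApplication $ssa -Name "ContentAuthor" -Type 1

# Map an existing crawled property to it (repeat for Author, Created By, Owner, From, etc.)
$cp = Get-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $ssa -Name "ows_Author"
New-SPEnterpriseSearchMetadataMapping -SearchApplication $ssa -ManagedProperty $mp -CrawledProperty $cp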

On the Search UI

If you check a typical SharePoint Search UI, you can find the Managed Properties in several ways:

Customized Search UI in SharePoint 2010 (with FS4SP)

1. Refiners – Refiners can be created from Managed Properties. You can define several refiner types (text, numeric range, date range, etc.) by customizing this Web Part’s Filter Category Definition property. There are several articles and blog posts describing how to do this; one of my favorites is this one by John Ross.

2. Search Result Properties – The out-of-the-box Search Result Set is something like this:

OOTB Search Results

This UI contains some basic information about your content, but I’ve never seen an environment where it didn’t have to be customized to some degree. Like in the first screenshot above. You can include the Managed Properties you want, and you can customize the way they are displayed too. For this, you’ll have to edit some XMLs and XSLTs, see below…

3. Property-based Actions – If you can customize the UI of properties on the Core Results Web Part, why not assign some actions to them? For example, a link to a related item. A link to more details. A link to the customer dashboard. Anything that has a (parameterized) URL and has business value to your Search Users…

4. Scopes and Tabs – Search Properties can be used for creating Scopes, and each scope can have its own Tab on the Search UI.

Core Result Web Part – Fetched Properties

If you want to add a managed property to the Search UI, the first step is adding it to the Fetched Properties. This field is a bit tricky though:

Fetched Properties

Edit the page, open the Core Results Web Part’s properties, and expand the Display Properties. Here you’ll see the field for Fetched Properties. Take a deep breath and try to edit it – yes, it’s a single-line, crazy long XML. And no, don’t try to break it into lines and tabs in your favorite XML editor and paste it back here, because you’ll have another surprise – this is really a single-line text editor control. If you paste a multi-line XML here, you’ll get the first line only…

Instead, copy the content of this field to the clipboard and paste it into Notepad++ (a free text editor tool, and really… it is a Notepad++ :)). It looks like this:

Fetched Properties in Notepad++

Open the Language menu and select XML. Your XML will still be a single line, but at least it’s formatted.

Open the Plugins / XML Tools / Pretty Print (XML only – with line breaks) menu, and here you go! Here is your well formatted, nice Fetched Properties XML:

Notepad++ XML Tools Pretty print (XML Only - with line breaks)

So, you can enter your Managed Properties, by using the Column tag:

<Column Name="ContentAuthor"/>

OK, you’re done with editing, but as I’ve mentioned, it’s not a good idea to copy this multi-line XML and paste it into the Fetched Properties field of the Core Results Web Part. Instead, use the Linearize XML menu of the XML Tools in Notepad++, and your XML will become one loooooooooong line immediately. From this point, it’s really an easy copy-paste again. Do you like it? 🙂

NOTES about the Fetched Properties:

  • If you enter a property name that doesn’t exist, this error message will be displayed:

Property doesn't exist or is used in a manner inconsistent with schema settings.

  • You’ll get the same(!) error if you enter the same property name more than once.
  • Also, you’ll get the same error if you enter some invalid property names to the Refinement Panel Web Part!

Debugging the Property Values

Once you’ve entered the proper Managed Property names into the Fetched Properties field, you’re technically ready to use them. But first, you should be able to check their values without too much effort. Matthew McDermott has published a very nice way to do this: use an empty XSL on the Core Results Web Part, so that you get the plain XML results. You can find the full description here.

In summary: if you create a Managed Property AND add it to the Fetched Properties, you’re ready to display (and use) it in the Result Set. For debugging the property values, I always create a test page with Matthew’s empty XSL, and start working on the UI customization only afterwards.

Enjoy!

 

Why are some Refiner values hidden?

Refiners are cool whether you use SharePoint Search or FAST, no question. I really like them; they give so many options and so much power to the end users.

But there’s a very common question around them: the deep vs. shallow behavior. You know the definitions very well: FAST Search for SharePoint has deep refiners, which means every result in the result set is processed and used when calculating the refiners. SharePoint Search uses shallow refiners, where the refiner values are calculated from the first 50 results only.

These definitions are easy, right? But let’s think a bit further and try to answer the question that pops up at almost every conference: Why are some Refiner values not visible when searching? Moreover: why are they visible when running Query1 and hidden when running Query2?

For example: let’s say you have a lot of documents crawled, and you enter a query whose result set contains very many items. Thousands, tens of thousands, or even more.

Let’s say you have an Excel workbook in the result set that might be relevant for you, but this Excel file is not boosted in the result set at all; say the first Excel result is in the 51st position (you have a lot of Word, PowerPoint, PDF, etc. files in positions 1-50).

What happens if you use FAST Search? As the refiners are deep, every result gets processed, including your Excel workbook. For example, in the Result Type refiner you’ll see all the Word, PowerPoint, and PDF file types as well as Excel. The easy way: you click on the Excel refiner and you get what you’re looking for immediately.

Result Type refiner with deep refiners: the Excel value is displayed

But what’s the case if you don’t have FAST Search, only the SharePoint one? As only the first 50 results are processed for the refiner calculation, your Excel workbook won’t be included. This means the Result Type refiner displays the Word, PowerPoint, and PDF values but doesn’t display Excel at all, as your Excel file is not among the top results. You’ll see the Result Type refiner as if there weren’t any Excel results at all!

Result Type refiner with shallow refiners: the Excel value is missing

Conclusion: the difference between shallow and deep refiners might not seem that important at first sight. But you have to be aware that there’s a huge difference in a real production environment, as you and your users might have some hidden refiner values, and sometimes it’s hard to understand why.

In other words, if a refiner value shows up on your Refinement Panel, that means:

  • In case of FAST Search for SharePoint (deep refiners): There’s at least one item matching this refiner value in the whole result set. The exact number of items matching the refiner value is also displayed.
  • In case of SharePoint Search (shallow refiner): There’s at least one item matching this refiner value in the first 50 results.

If you cannot see a specific value on the Refiner Panel, that means:

  • In case of FAST Search for SharePoint (deep refiner): There’s no result matching this refiner value at all.
  • In case of SharePoint Search (shallow refiner): There’s no result matching this refiner in the first 50 results.

 

SP2010 SP1 issues – Config Wizard!

Recently, I have seen several issues after installing SharePoint 2010 SP1. The fix is very easy, but first let me describe the symptoms I have seen.

1. The search application ‘Search Service Application’ on server MYSERVER did not finish loading. View the event logs on the affected server for more information.

This error appears on the Search Service Application. Event Log on the server contains tons of errors like this:

Log Name:      Application

Source:        Microsoft-SharePoint Products-SharePoint Server

Date:          7/19/2011 5:01:22 PM

Event ID:      6481

Task Category: Shared Services

Level:         Error

Keywords:     

User:          MYDOMAIN\svcuser

Computer:      myserver.mydomain.local

Description:

Application Server job failed for service instance Microsoft.Office.Server.Search.Administration.SearchServiceInstance (898667e4-126e-45d2-bb52-43f613669084).

Reason: The device is not ready.

Technical Support Details:

System.IO.FileNotFoundException: The device is not ready. 

   at Microsoft.Office.Server.Search.Administration.SearchServiceInstance.Synchronize()

   at Microsoft.Office.Server.Administration.ApplicationServerJob.ProvisionLocalSharedServiceInstances(Boolean isAdministrationServiceJob)

2. On a different server, SharePoint needed to be upgraded from Standard to Enterprise (right after SP1 had been installed). After entering the license key, we got an “Unsuccessful” error on the user interface, and this in the Event Log:

The synchronization operation of the search component: d289abde-9641-46d0-8d32-0345f1885704 associated to the search application: Search Service Application on server: BAIQATEST01 has failed. The component version is not compatible with the search database: Search_Service_Application_CrawlStoreDB_a737e7614f034544a8c1da6fe4a24f7b on server: BAIQATEST01. The possible cause for this failure is The database schema version is less than the minimum backwards compatibility schema version that is supported for this component. To resolve this problem upgrade this database.

The reason for these errors was the same in both cases: the SharePoint 2010 Config Wizard should have been run after installing SP1. Once you run the Config Wizard, these errors disappear and everything starts to work fine again!
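If you prefer the command line to the Config Wizard UI, the same upgrade can be kicked off from an elevated command prompt (the path assumes a default SharePoint 2010 installation):

cd "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN"
.\PSConfig.exe -cmd upgrade -inplace b2b -wait -force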

How to check the Crawl Status of a Content Source

As you know, I’m working (well, playing) with SharePoint/FAST Search a lot. I have a lot of tasks where I have to sit on the F5 button while crawling and check the status: has it started? is it still crawling? is it finished yet?…

I have to hit F5 every minute. I’m too lazy for that, so I decided to write a PowerShell script that does nothing but check the crawl status of a Content Source and write it to the console for me. That way I can work on my second screen while it’s working and working and working – without touching F5.

The script is pretty easy:

$SSA = Get-SPEnterpriseSearchServiceApplication -Identity "Search Service Application"
$ContentSource = $SSA | Get-SPEnterpriseSearchCrawlContentSource -Identity "My Content Source"

do {
    Write-Host $ContentSource.CrawlState (Get-Date).ToString() "-" $ContentSource.SuccessCount "/" $ContentSource.WarningCount "/" $ContentSource.ErrorCount
    Start-Sleep 5
} while (1)

Yes, it works fine for FAST (FS4SP) Content Sources too.

Troubleshooting: FAST Admin DB

The environment:

A farm with three servers: SharePoint 2010 (all roles), FAST admin, FAST non-admin. SQL is on the SharePoint box too.

The story:

Recently, I had to reinstall the box with SP2010 and SQL. Everything seemed to be fine: installing SQL and SP2010, configuring the FAST Content and Query Service Apps, crawling, searching… It was like a dream, almost unbelievable. But after that, I started to get an error on the BA Insight Longitude Connectors admin site when I started to play with the metadata properties: Exception configuring search settings: … An error occurred while connecting to or communicating with the database…

I went to the FAST Query / FAST Search Administration / Managed Properties, and got this error: Unexpected error occurred while communicating with Administration Service

Of course, I went to the SQL Server’s event log, where I found this error: Login failed for user ‘MYDOMAIN\svc_user’. Reason: Failed to open the explicitly specified database. On the Details tab, I could see ‘master’ as the related DB.

I went to SQL Server Profiler, but the trace told me the same.

Of course, I checked everything around FAST: the user was in the FASTSearchAdministrators group, permission settings were correct on SQL, etc.

Finally, I found what I was looking for: the Event Log on the FAST admin server contained this error: System.Data.SqlClient.SqlException: Cannot open database “FASTSearchAdminDatabase” requested by the login. The login failed. Login failed for user ‘MYDOMAIN\svc_user’

The solution:

Yes, it was what I was looking for: I had really forgotten to restore the FASTSearchAdminDatabase. But what to do if you don’t have a backup of it?

Never mind, here is the Powershell command for you:

Install-FASTSearchAdminDatabase -DbServer SQLServer.mydomain.local -DbName FASTSearchAdminDatabase

Voilà, it’s working again! 🙂

PowerShell script for exporting Crawled Properties (FS4SP)

Recently, I was working with FAST Search Server 2010 for SharePoint (FS4SP) and had to provide a list of all crawled properties in the category MyCategory. Here is my pretty simple script that provides the list in a .CSV file:

$outputfile = "CrawledProperties.csv"

if (Test-Path $outputfile) { Clear-Content $outputfile }

foreach ($crawledproperty in (Get-FASTSearchMetadataCrawledProperty)) {
    $category = $crawledproperty.CategoryName
    if ($category -eq "MyCategory") {
        # Get the name and type of the crawled property
        $name = $crawledproperty.Name
        $type = $crawledproperty.VariantType

        switch ($type) {
            20 {$typestr = "Integer"}
            31 {$typestr = "Text"}
            11 {$typestr = "Boolean"}
            64 {$typestr = "DateTime"}
            default {$typestr = "other"}
        }

        # Build the output: $name and $typestr separated by a space
        $msg = $name + " " + $typestr
        Write-Output $msg | Out-File $outputfile -Append
    }
}

$msg = "Crawled properties have been exported to the file " + $outputfile
Write-Output ""
Write-Output $msg
Write-Output ""
Write-Output ""

How to Test your FAST Search Deployment?

Recently I made a farm setup where SharePoint 2010 (SP2010) and FAST Search Server 2010 for SharePoint (FS4SP) had to be installed on separate boxes. After a successful installation, it’s always useful to do some testing before indexing the production content sources. In case of FS4SP, it’s much easier than you’d think.

First, you have to push some content to the content collection. Follow these steps:

  1. Create a new document anywhere on your local machine, for example C:\FAST_test.txt
  2. Fill some content into this document, for example: Hello world, this is my FAST Test doc.
  3. Save the document.
  4. Run the Microsoft FAST Search Server 2010 for SharePoint shell.
  5. Run the following command: docpush -c <collection name> "<full path to a file>" (in my case: docpush -c sp C:\FAST_test.txt). (See the full docpush reference here.)

If this command ran successfully, your document has been pushed to the FAST content collection and can be queried. The next step is to test some queries:

  1. Open a browser on the FAST server and go to http://localhost:[base_port+280]. In case you used the default base port (13000), you should go to http://localhost:13280. This is the FAST Query Language (FQL) test page, so you can do some testing directly here.
  2. Search for a word contained in the document you’ve uploaded (C:\FAST_test.txt). For example, search for the word ‘world’ or ‘FAST’. The result set should contain the document you uploaded to the content collection before.
  3. Also, you can set some other parameters on the FQL testing page, for example language setting, debug info, etc.

FAST Search Query Language (FQL) test page

But this site (http://localhost:13280) is much more than a simple FQL testing page. On the top navigation there are other useful functions too:

  • Log
  • Configuration
  • Control
  • Statistics
  • Exclusion List
  • Reference

I’ll dive deeper into these functions in a later post. Stay tuned!

 

Hunting for a mystic error during SharePoint 2010 installation

Lately I’ve been installing SharePoint 2010 Server for a customer. It was a little tricky, as the server didn’t have Internet access, so I had to do all of the downloads manually. (Fortunately, I found this PowerShell script, so this manual work wasn’t so bad.)
After a smooth installation of the prerequisites, I tried to install SharePoint 2010 Server in Farm mode, but after a few minutes I got the following error: Error writing to file: Microsoft.PerformancePoint.Scorecards.Script.dll. Verify that you have access to that directory. Of course, I was an administrator on that box with Windows Server R2 x64.
Error writing to file: Microsoft.PerformancePoint.Scorecards.Script.dll. Verify that you have access to that directory.
I tried again and again; the result was the same error. I even turned UAC off, but it didn’t help.
Neither the Event Log nor the log files contained anything useful, only the same error message as in the screenshot above.
Procmon didn’t give me anything useful either.
When I was looking for Microsoft.PerformancePoint.Scorecards.Script.dll, I found out that it had been saved to a temp folder on the server during the setup process, so I tried to register it manually, but again without success. It gave the weirdest error message I’ve ever seen:
The module "Microsoft.PerformancePoint.Scorecards.Script.dll" may not be compatible with the version of Windows that you're running. Check if the module is compatible with an x86 (32-bit) or x64 (64-bit) version of regsvr32.exe.
Finally, I didn’t have a better idea than to download the install files again and upload them to the server. And guess what: it worked, and the installation was successful without any issue.