All posts tagged crawler


Workshop: SharePoint 2016 and Office 365 Search in Practice – New York City, May 2017

The next stop of Search Explained Roadshow 2017 is New York City, NY!

Date: May 15-16, 2017
Venue: Avanade Innovation Center, 155 Avenue of the Americas, 6th floor, New York, NY


Time Machines vs. Incremental Crawl

Recently I’ve been working with a customer where my job was to make their SQL-based content management system searchable in SharePoint. Nice challenge. One of the best parts was what I call the “time machine”.

Imagine a nice, big environment where a full crawl takes more than two weeks. There are several points during such a project where we need a full crawl, for example when working with managed properties. But if a full crawl takes this long, it’s always a pain. You know, the kind where you can go on holiday while it’s running 😉

We were getting close to the end of the project, with incrementals scheduled, etc., when it turned out there were some items that had recently been put into the database with an older “last modified date”. How can this happen? With some app, for example, or if users can work offline and upload their docs later (depending on the source system’s capabilities, these docs sometimes get the original time stamp and sometimes the current upload time as their “last modified date”). If items arrive with linear “last modified dates”, incremental crawls are easy to do, but imagine this sequence:

  1. Full crawl, everything in the database gets crawled.
  2. Item1 has been added, last_modified_date = ‘2013-08-09 12:45:27’
  3. Item2 has been modified, last_modified_date = ‘2013-08-09 12:45:53’
  4. Incremental crawl at ‘2013-08-09 12:50:00’. Result: Item1 and Item2 crawled.
  5. Item3 has been added, last_modified_date = ‘2013-08-09 12:58:02’
  6. Incremental crawl at ‘2013-08-09 13:00:00’. Result: Item3 crawled.
  7. Item4 has been added by an external tool, last_modified_date = ‘2013-08-09 12:45:00’.
    Note that this time stamp is earlier than the previous crawl’s time.
  8. Incremental crawl at ‘2013-08-09 13:10:00’. Result: nothing gets crawled.

The reason: Item4’s last_modified_date time stamp is older than the previous crawl’s time, and the crawler assumes every change happened after that (i.e. there is no time machine built into the backend 😉 ).

What to do now?

The first option is a full crawl. But:

  1. If a full crawl takes more than two weeks, it’s not always an option. We have to avoid it if possible.
  2. We can assume the very same thing can happen anytime in the future, i.e. docs appearing from the past, even from before the last crawl time. And a full crawl is not an option, see #1.

Obviously, the customer would like to see these “time travelling” items in the search results as well, but it looks like neither a full nor an incremental crawl is an option.

But consider this idea: what if we could trick the incremental crawl into thinking the previous crawl was not 10 minutes ago but a month ago (or two, or a year, depending on how old the docs newly appearing in the database can be)? In this case, the incremental crawl would not check for new/modified items only since the last incremental, but for a month (or two, or a year, etc.) back. Time machine, you know… 😉

Guess what? – It’s possible. The solution is not official and not supported, but it works. The “only” thing you have to do is modify the proper time stamps in the MSSCrawlURL table, something like this:
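Here is a minimal sketch of the idea, using Invoke-Sqlcmd from the SQL Server PowerShell tools. The database name and the CommitTime column below are placeholders, not verified names – check your own MSSCrawlURL schema first, and take a backup before touching anything:

# Push the stored crawl time stamps back by one month
# (unsupported – placeholder names, test in a non-production farm first!)
Invoke-Sqlcmd -ServerInstance "SQLSERVER" -Database "Search_CrawlStoreDB" -Query "UPDATE MSSCrawlURL SET CommitTime = DATEADD(month, -1, CommitTime)"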

Why? – Because the crawler determines the “last crawl time” from this table. If you set the time stamps back, the crawler thinks the previous crawl was long ago and goes back in time to pick up the changes from that longer period. This way, without doing a full crawl, you’ll get every item indexed, even the “time travelling” ones from the past.

Ps. The same can be done if you have last_modified_date values in the future. The best docs from the future I’ve seen so far were created in 2127…


The problem in this case is that as soon as you crawl any of these, the crawler considers 2127 as the last crawl’s year, and nothing created before that (i.e. in the present) will get crawled by any upcoming incremental. Until 2127, of course 😉
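If you suspect you have documents like these, it’s worth hunting them down in the source database before they get crawled. A minimal sketch, assuming a hypothetical items table with a last_modified_date column, and Invoke-Sqlcmd from the SQL Server PowerShell tools:

# Find every item whose "last modified date" is in the future
# (server, database, table and column names are examples only)
Invoke-Sqlcmd -ServerInstance "SQLSERVER" -Database "SourceCMS" -Query "SELECT id, last_modified_date FROM items WHERE last_modified_date > GETDATE()"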



Four Tips for Index Cleaning

If you’ve ever had fun with SharePoint Search, most likely you’ve seen (or even used) Index Reset there. This is very useful if you want to clear everything from your SharePoint index – but sometimes it’s not good enough:

  1. If you don’t want to clean the full index but one Content Source only.
  2. If you have FAST Search for SharePoint 2010.
  3. Both 🙂

1. Cleaning up one Content Source only

Sometimes you have too much content crawled but need to clear only one Content Source. In this case, clearing everything might be very painful – imagine clearing millions of documents, then re-crawling everything that should not have been cleared…

Instead, why not clean just that one Content Source?

It’s much easier than it seems:

  1. Open your existing Content Source.
  2. Check that there’s no crawl running on this Content Source. The status of the Content Source has to be Idle. If not, stop the current crawl and wait until it’s done.
  3. Remove all Start Addresses from your Content Source (don’t forget to note them before clearing!).
  4. Wait until the index gets cleaned up.(*)
  5. Add back the Start Addresses (URLs) to your Content Source, and save your settings.
  6. Enjoy!

With this, you’ll be able to clear only one Content Source.

Of course, you can use either the SSA UI in Central Administration or PowerShell; the logic is the same. Here is a simple PowerShell script for removing the Start Addresses:

$contentSSA = "FAST Content SSA"
$sourceName = "MyContentSource"

$source = Get-SPEnterpriseSearchCrawlContentSource -Identity $sourceName -SearchApplication $contentSSA
$URLs = $source.StartAddresses | ForEach-Object { $_.OriginalString }

$source.StartAddresses.Clear()

Then, as soon as you’re sure the Index has been cleaned up(*), you can add back the Start Addresses, by this command:

 

ForEach ($address in $URLs) { $source.StartAddresses.Add($address) }

2. Index Reset in FAST Search for SharePoint

You most likely know Index Reset on the Search Service Application UI.


Well, in case you’re using FAST Search for SharePoint 2010 (FS4SP), that’s not enough. The steps for doing a real Index Reset are the following:

  1. Make an Index Reset on the SSA, see the screenshot above.
  2. Open FS4SP PowerShell Management on the FAST Server, as a FAST Admin.
  3. Run the following command: Clear-FASTSearchContentCollection –Name <yourContentCollection>. The full list of available parameters can be found here. This deletes all items from the content collection, without removing the collection itself.

3. Cleaning up one Content Source only in FAST Search for SharePoint

Steps are the same as in case of SharePoint Search, see above.

4. Checking the status of your Index

In Step #4 above (*), I mentioned that you should wait until the index gets cleaned up, and this always takes time.

The first place to check is the SSA, where there is a number that is a very good indicator: Searchable Items.

In case of FS4SP, you should use PowerShell again, after running the Clear-FASTSearchContentCollection command:

  1. Open FS4SP PowerShell Management on the FAST Server, as a FAST Admin.
  2. Run the following command: Get-FASTSearchContentCollection –Name <yourContentCollection>. The result contains several pieces of information, including DocumentCount:

How to check the clean-up process with this?

First option: if you know how many items should be cleaned, just check the DocumentCount before you clean the Content Source, and regularly afterwards. If the value of DocumentCount is around the value you’re expecting AND not decreasing anymore, you’re done.

Second option: if you don’t know how many items will be cleared, just check the value of DocumentCount regularly, like every five minutes. If this value has stopped decreasing AND doesn’t decrease for a while (e.g. for three consecutive checks), you’re done.
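Here is a minimal sketch of this polling logic for FS4SP (the collection name and the interval are examples; for extra safety, extend it to require several stable readings, as described above):

$name = "sp"    # your content collection name
$previous = -1
do {
    $current = (Get-FASTSearchContentCollection -Name $name).DocumentCount
    Write-Host (Get-Date).ToString() "- DocumentCount:" $current
    if ($current -eq $previous) { break }    # the value has stopped decreasing
    $previous = $current
    Start-Sleep 300    # check every five minutes
} while ($true)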

As soon as you’re done, you can add back the Start Addresses to your Content Source, as mentioned above.

 

Event-Driven Crawl Schedule

Recently I’ve been working for a customer where I found an interesting requirement: they had several content sources and wanted to crawl them one after another. Scheduling the incrementals for a fixed time was not a good solution, as their incremental crawls were very hectic: an incremental crawl of the same content source took 5 minutes one time, then 1.5 hours the next. And of course, they didn’t want idle time.

But we cannot define this kind of rule from the UI, so the ultimate solution was PowerShell.

First, we need to be able to start the crawl. Let’s talk about Incremental Crawl only this time. Here is the PowerShell script for this:

$SSA = Get-SPEnterpriseSearchServiceApplication -Identity "Search Service Application"

$ContentSourceName = "My Content Source"

$ContentSource = $SSA | Get-SPEnterpriseSearchCrawlContentSource -Identity $ContentSourceName

$ContentSource.StartIncrementalCrawl()

It’s an easy one, isn’t it?

The next step is checking the status of this content source. We need this for several reasons: for example, we want to start the crawl only if it’s in Idle status, or we want to display the current status of the crawl every minute, etc.

Here is the PowerShell command you need:

$ContentSource.CrawlStatus

What values can it have? Here is the list of crawl statuses:

  • Idle
  • CrawlStarting
  • CrawlingIncremental / CrawlingFull
  • CrawlPausing
  • Paused
  • CrawlResuming
  • CrawlCompleting
  • CrawlStopping

OK, so we can determine the status now, and we can start a crawl. How do we make it event-driven? Here is the logical sequence we have to follow:

  1. Start the crawl of a content source.
  2. Wait until it’s done.
  3. Take the next content source and repeat steps 1 and 2 until you’re done with each content source.
  4. Repeat this sequence.

The first step is creating a function, if we want nice code. Here you go, my first one:

function Crawl {
    # Start crawling
    $ContentSourceName = $args[0]
    $ContentSource = $SSA | Get-SPEnterpriseSearchCrawlContentSource -Identity $ContentSourceName
    $CrawlStarted = Get-Date

    # Check the crawl status and start only if the content source is idle
    if ($ContentSource.CrawlStatus -eq "Idle") {
        $ContentSource.StartIncrementalCrawl()
        Start-Sleep 1
        Write-Host $ContentSourceName " - Crawl Starting..."

        do {
            Start-Sleep 60    # Display the crawl status every 60 seconds
            $Now = Get-Date
            $Duration = $Now.Subtract($CrawlStarted)    # Duration of the current crawl
            $Speed = $ContentSource.SuccessCount / $Duration.TotalSeconds    # Speed of the current crawl, docs/sec
            Write-Host $ContentSourceName " - " $ContentSource.CrawlStatus (Get-Date).ToString() "-" $ContentSource.SuccessCount "/" $ContentSource.WarningCount "/" $ContentSource.ErrorCount "(" ("{0:N2}" -f $Speed) " doc/sec)"
        } while (($ContentSource.CrawlStatus -eq "CrawlStarting") -or ($ContentSource.CrawlStatus -eq "CrawlCompleting") -or ($ContentSource.CrawlStatus -eq "CrawlingIncremental") -or ($ContentSource.CrawlStatus -eq "CrawlingFull"))

        Write-Host $ContentSourceName " - Crawling Finished"
        Write-Host ""
    }
}

This is how you can call this function:

Crawl(“My Content Source”)

Some additional steps you might need:

  • If you want to run this script once a day (you need daily incrementals only, but would like them done as quickly as possible), just schedule this script as a Windows task.
  • If you want to run this script during the day only (and release the resources for some other jobs at night, for example), you can apply the “start in the morning, pause in the evening” logic. I made a simple example in my blog post a few months ago.
  • If you want to run this sequence all day long, you might insert this logic into an infinite loop, as shown in the sketch after this list. (But be careful: sometimes you’ll need to run a full crawl, and then you have to stop running this script.)
  • You can insert some other steps into this script, too. If you want to do something (logging, sending alerts, etc.) when the crawl starts / stops, just do it here. It’ll be your custom event handler on the crawl events.
  • You can even write the output of this script to a file, so that you’ll have your own crawl log.
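For the infinite-loop option, here is a minimal sketch built on the Crawl function above (the content source names are placeholders – use your own):

# Crawl all content sources one after another, then start over
while ($true) {
    ForEach ($name in @("Content Source A", "Content Source B")) {
        Crawl($name)
    }
}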

The scripts above work fine with both SharePoint Search and FAST Search for SharePoint. Enjoy!

How to check the Crawl Status of a Content Source

As you know, I’m playing with (well, working with) SharePoint/FAST Search a lot. I have a lot of tasks where I have to sit on the F5 button while crawling and check the status: has it started? is it still crawling? is it finished yet?…

I have to hit F5 every minute. I’m too lazy for that, so I decided to write a PowerShell script that does nothing but check the crawl status of a Content Source and write it to the console for me. This way I can work on my second screen while it’s working and working and working – without touching F5.

The script is pretty easy:

$SSA = Get-SPEnterpriseSearchServiceApplication -Identity "Search Service Application"
$ContentSource = $SSA | Get-SPEnterpriseSearchCrawlContentSource -Identity "My Content Source"

do {
    Write-Host $ContentSource.CrawlStatus (Get-Date).ToString() "-" $ContentSource.SuccessCount "/" $ContentSource.WarningCount "/" $ContentSource.ErrorCount
    Start-Sleep 5
} while (1)

Yes, it works fine for FAST (FS4SP) Content Sources too.

How to Schedule Crawl Start/Pause in SharePoint 2010 Search by PowerShell

If your hardware is not strong enough, there’s a pretty common request to start the crawl in the evening and pause it the next morning, before the work day starts. Scheduling the start of a Full/Incremental Crawl is pretty easy from the admin UI, but you have to use a trick if you want to schedule the pause too. Here is my favorite trick: use PowerShell!

Here is what I do:

  1. Create a script to start/resume the crawl (CrawlStart.ps1).
  2. Create a script to pause the crawl (CrawlPause.ps1).
  3. Schedule the script CrawlStart.ps1 to run in the evening (like 6pm).
  4. Schedule the script CrawlPause.ps1 to run in the morning (like 6am).

It’s simple, right? 😉

Here are some more details.

First, we have to know how to add the SharePoint SnapIn to PowerShell. Here is the command we need: Add-PSSnapin Microsoft.SharePoint.PowerShell.

Second, we have to get the Content Source from our Search Service Application:

$SSA = Get-SPEnterpriseSearchServiceApplication -Identity "Search Service Application"
$ContentSource = $SSA | Get-SPEnterpriseSearchCrawlContentSource -Identity "My Content Source"

Then we have to know how to check the status of this content source’s crawl: $ContentSource.CrawlStatus. Here are the available values:

  • Idle
  • CrawlStarting
  • CrawlingIncremental / CrawlingFull
  • CrawlPausing
  • Paused
  • CrawlResuming
  • CrawlCompleting
  • CrawlStopping

Finally, we have to know how to start/pause/resume the crawling:

  • Start Full Crawl: $ContentSource.StartFullCrawl()
  • Start Incremental Crawl: $ContentSource.StartIncrementalCrawl()
  • Pause the current crawl: $ContentSource.PauseCrawl()
  • Resume the crawl: $ContentSource.ResumeCrawl()

That’s it. Here are the final scripts:

1. CrawlStart.ps1

Add-PSSnapin Microsoft.SharePoint.PowerShell

$SSA = Get-SPEnterpriseSearchServiceApplication -Identity "Search Service Application"
$ContentSource = $SSA | Get-SPEnterpriseSearchCrawlContentSource -Identity "My Content Source"

if ($ContentSource.CrawlStatus -eq "Idle") {
    $ContentSource.StartIncrementalCrawl()
    Write-Host "Starting Incremental Crawl"
}

if ($ContentSource.CrawlStatus -eq "Paused") {
    $ContentSource.ResumeCrawl()
    Write-Host "Resuming Incremental Crawl"
}

2. CrawlPause.ps1

Add-PSSnapin Microsoft.SharePoint.PowerShell

$SSA = Get-SPEnterpriseSearchServiceApplication -Identity "Search Service Application"
$ContentSource = $SSA | Get-SPEnterpriseSearchCrawlContentSource -Identity "My Content Source"

Write-Host $ContentSource.CrawlStatus

if (($ContentSource.CrawlStatus -eq "CrawlingIncremental") -or ($ContentSource.CrawlStatus -eq "CrawlingFull")) {
    $ContentSource.PauseCrawl()
    Write-Host "Pausing the current Crawl"
}

Write-Host $ContentSource.CrawlStatus

And finally, you have to schedule these scripts as Windows tasks, using these actions: powershell -command "& 'C:\Scripts\CrawlStart.ps1'" to start and powershell -command "& 'C:\Scripts\CrawlPause.ps1'" to pause your crawl.
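If you prefer creating these tasks from the command line instead of the Task Scheduler UI, here is a sketch with schtasks (the task names and times are examples only):

schtasks /Create /TN "CrawlStart" /TR "powershell -command \"& 'C:\Scripts\CrawlStart.ps1'\"" /SC DAILY /ST 18:00
schtasks /Create /TN "CrawlPause" /TR "powershell -command \"& 'C:\Scripts\CrawlPause.ps1'\"" /SC DAILY /ST 06:00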

Ps.: These scripts work fine for FAST Content Sources in SharePoint too; in that case you have to use the FAST Content SSA.

Enjoy!

Troubleshooting: FAST Admin DB

The environment:

A farm with three servers: SharePoint 2010 (all roles), FAST Admin, FAST non-admin. SQL is on the SharePoint box too.

The story:

Recently, I had to reinstall the box with SP2010 and SQL. Everything seemed to be fine: installing SQL and SP2010, configuring the FAST Content and Query Service Apps, crawl, search… It was like a dream, almost unbelievable. But then I started to get an error on the BA Insight Longitude Connectors admin site when I started to play with the metadata properties: Exception configuring search settings: … An error occurred while connecting to or communicating with the database…

I went to the FAST Query / FAST Search Administration / Managed Properties, and got this error: Unexpected error occurred while communicating with Administration Service

Of course, I went to the SQL Server’s event log, where I found this error: Login failed for user ‘MYDOMAIN\svc_user’. Reason: Failed to open the explicitly specified database. On the Details tab I could see ‘master’ as the related DB.

I went to SQL Server Profiler, but the Trace showed the same.

Of course, I checked everything around FAST: the user was in the FASTSearchAdministrators group, permission settings were correct on SQL, etc.

Finally, I found what I was looking for: the Event Log on the FAST admin server contained this error: System.Data.SqlClient.SqlException: Cannot open database “FASTSearchAdminDatabase” requested by the login. The login failed. Login failed for user ‘MYDOMAIN\svc_user’

The solution:

Yes, it was what I was looking for: I had really forgotten to restore the FASTSearchAdminDatabase. But what to do if you don’t have a backup of it?

Never mind, here is the PowerShell command for you:

Install-FASTSearchAdminDatabase -DbServer SQLServer.mydomain.local -DbName FASTSearchAdminDatabase

Voilà, it’s working again! 🙂

PowerShell script for exporting Crawled Properties (FS4SP)

Recently, I was working with FAST Search Server 2010 for SharePoint (FS4SP) and had to provide a list of all crawled properties in the category MyCategory. Here is my pretty simple script that provides the list in a .CSV file:

$outputfile = "CrawledProperties.csv"

if (Test-Path $outputfile) { Clear-Content $outputfile }

foreach ($crawledproperty in (Get-FASTSearchMetadataCrawledProperty)) {
    $category = $crawledproperty.CategoryName
    if ($category -eq "MyCategory") {
        # Get the name and type of the crawled property
        $name = $crawledproperty.Name
        $type = $crawledproperty.VariantType

        # Translate the variant type code into a readable name
        switch ($type) {
            20 {$typestr = "Integer"}
            31 {$typestr = "Text"}
            11 {$typestr = "Boolean"}
            64 {$typestr = "DateTime"}
            default {$typestr = "Other"}
        }

        # Build the output: $name and $typestr separated by a space
        $msg = $name + " " + $typestr
        Write-Output $msg | Out-File $outputfile -Append
    }
}

$msg = "Crawled properties have been exported to the file " + $outputfile
Write-Output ""
Write-Output $msg
Write-Output ""

Best Practices for FAST Search Server for SharePoint 2010 Installation [Updated]

Last week, I was honored to be a speaker at the Best Practices Conference 2010 in Washington DC, and I got a lot of questions about the installation of FAST Search Server for SharePoint 2010. So, I’ve decided to collect my tips and best practices and publish them here. I know my list is not complete and it will definitely grow in the future.

  1. Install FAST Search Server on one or more separate boxes, not on the SharePoint server(s). – Why? Because in this case not only will the performance be better, but you’ll have a more manageable and maintainable environment. For example, checking the performance or troubleshooting is much easier this way. Of course, there are always exceptions. Although the best practice is to have separate App Servers and WFEs, as well as separate Admin and Non-Admin FAST servers in the farm, in some cases we have to deploy a different architecture. Even in my case: as you know, I give a lot of sessions and presentations, online as well as in person. If you have speaker experience, you know that we basically have three options for making a demo during a presentation:
    • Bring the farm to the venue. – Yes, sometimes it’s an option, for example if the event is in your company’s office. But in most cases it isn’t; you cannot bring 2-3-5-… boxes to a foreign conference.
    • Connect to a remote farm. – If you leave the farm in the office (or at home), you can connect to it from your session via Remote Desktop, or simply in the browser, for example. It’s a good choice if you have a good Internet connection; otherwise your whole presentation can collapse on a demo that is unable to work.
    • Have the demo environment on your laptop. – OK, you can do that only if you have a proper laptop. But even if you have one, it’s not too easy to have a full SharePoint 2010 farm, SQL, AD and FAST Search Server all together on your laptop. But I can say: it’s not impossible 🙂 I have installed a VMware image on my laptop (HP 8530p with 8GB RAM) with Win2008 R2 as a domain controller, SQL Express and SharePoint Server 2010 Enterprise, as well as FAST Search Server, on the same virtual box. It works, but of course it’s not a production environment: it doesn’t contain too much content, there are no crawls scheduled, etc. It’s just a simple demo machine for my sessions. So please don’t do this kind of installation unless you really need a demo environment! In production, never. Never! Please…
  2. After installing the FAST bits, use the FAST Configuration Wizard for the final steps, but don’t forget the additional steps! – First of all, the Config Wizard doesn’t create and import the certificates, so you have to do that manually. The following steps have to be performed:
    • Export the certificate with the following Powershell command running in the SharePoint 2010 Management Shell:

      $stsCert = (Get-SPSecurityTokenServiceConfig).LocalLoginProvider.SigningCertificate

      $stsCert.Export("cert") | Set-Content -encoding byte MOSS_STS.cer

    • Import the certificate on the FAST Search Server, either in the Certificates MMC Console or by running Powershell again.
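      If you choose the PowerShell way, here is a minimal sketch using the .NET certificate classes (the file path and the TrustedPeople store are assumptions – follow the official FS4SP certificate guidance for the store your deployment requires):

      # Import MOSS_STS.cer into the local machine's certificate store (path and store name are examples)
      $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\Install\MOSS_STS.cer")
      $store = New-Object System.Security.Cryptography.X509Certificates.X509Store("TrustedPeople", "LocalMachine")
      $store.Open("ReadWrite")
      $store.Add($cert)
      $store.Close()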


    • [Update 03/10/2010] If you get a Certificate error during the configuration (check the config log for the error message), don’t panic: close the FAST Search Configuration Wizard, and start it again as Administrator (‘Run as Administrator’). It’ll work.

  3. Check InstallInfo.txt before deploying the FAST Service Applications. – Even if you’ve installed a lot of FAST Servers and think you know all the port numbers and settings, please check InstallInfo.txt. It contains all the URLs and port numbers for the proper deployment, and believe me: it’s worth the few seconds!
  4. Double-check all the URLs and port numbers when deploying the FAST Service Applications.
  5. If you’re ready with installation and configuration, create a FAST Search Center, and don’t forget to crawl your content!
    • [Update 03/11/2010] If the SharePoint content cannot be crawled in the FAST Content SSA, and the only thing you get instead of the result set is an error message like We did not find any results, but everything else (see point 6) looks fine, check the Firewall on the servers in the farm. You might also have an Unable to resolve Contentdistributor error in the Event Log.
  6. If the content has been crawled, check the environment by querying something on the FAST Search Center. If you get an error, don’t panic! 🙂
    • The search request was unable to connect to the Search Service – This error means that SharePoint is unable to connect to the FAST engine. Its reason can vary depending on your settings, so please check all of them:
      • Check the services with the command nctrl status.
      • Check the URLs and port numbers: compare if your Service Application settings fit to the InstallInfo.txt. Don’t forget to check the protocols as well (HTTP or HTTPS), as the port numbers can be different with the different protocols. And don’t forget to check both FAST Content SSA and FAST Query SSA.
      • Check if the FAST Query SSA is associated to your Web Application, and this is the default one.
      • Check if the credential has been imported properly to the FAST Server. Don’t copy it from another farm!
      • [Update 03/10/2010] Check [FASTSearchDirectory]\etc\contentdistributor.cfg for the Content Distributor URL and port number.
      • [Update 03/11/2010] Check if the FAST Server has been restarted after the installation. If not, you’ll probably get an Unexpected error occurred while communicating with Administration Service error too when trying to configure the FAST settings on SharePoint 2010 (for example, keywords, user context, etc.). Restart it (again) if it’s necessary.
    • Unable to display this Web part – This error message means that probably you have something misconfigured in FAST Search Server for SharePoint 2010.
      • Check the services with the command nctrl status.
      • Check the Managed Properties settings on the FAST Query SSA / FAST Search Administration page. If you get an error here, the FAST Search Server definitely doesn’t work properly and you should check it, not SharePoint. If you get the list of the managed properties, the FAST Search Server component is definitely running.
      • Check the URLs and port numbers: compare if your Service Application settings fit to the InstallInfo.txt. Don’t forget to check the protocols as well (HTTP or HTTPS), as the port numbers can be different with the different protocols. And don’t forget to check both FAST Content SSA and FAST Query SSA.
    • [Update 03/11/2010] Unexpected error occurred while communicating with Administration Service – The FAST Search Server’s Administration Service cannot be reached by SP2010.
      • Check the services with the command nctrl status.
      • Check the URLs and port numbers: compare if your Service Application settings fit to the InstallInfo.txt. Don’t forget to check the protocols as well (HTTP or HTTPS), as the port numbers can be different with the different protocols.
      • Check if the FAST Server has been restarted after the installation. Restart it (again) if it’s necessary.
    • [Update 03/10/2010] If the Word/PowerPoint thumbnails and previews don’t work, check Office Web Apps (try to open some documents in the browser). Be careful, because on DCs you can have a strange issue: if ‘Edit in Browser’ works but ‘View in Browser’ doesn’t, check this article, run IISRESET, and you might also need to run the SharePoint 2010 Config Wizard. After these steps, Office Web Apps should start to work properly. You might also need a Full Crawl before it works on FAST Result Pages.
    • [Update 03/10/2010] Sometimes it looks like the FAST Search results come from an old index: the result set contains items that don’t exist anymore and doesn’t contain some new items. If a new crawl doesn’t help, clear the FAST Content Collection by running this command in the FAST Search Server 2010 for SharePoint Management Shell: Clear-FASTSearchContentCollection –Name <ContentCollectionName>, and run an IISRESET. Of course, a new crawl will be needed to get the correct index again.
    • [Update 04/12/2010] In case you get an error like Failed to communicate with WCF service when running FAST admin functions (for example, Clear-FASTSearchContentCollection), check if you have FAST admin privileges on the server. If not, try running the command again as a different user with admin privileges.


Stay tuned, more details and updates are coming later, based on my experiences!