Doug on SharePoint

Stuff on SharePoint and other things.

Weird XML problem adding AAMs or taking the CA role on a new server

The other day I was upgrading the WFE servers to 64-bit and was rotating some WFEs in and out of the farm. While doing so I was moving the Central Admin around and got a failure running the command psconfig -cmd adminvs -provision.

The exception in the PSCDiagnostics log was this:

10/28/2009 10:48:59 7 ERR An exception of type System.Xml.XmlException was thrown. Additional exception information: Unexpected end of file while parsing Name has occurred. Line 27, position 14.

System.Xml.XmlException: Unexpected end of file while parsing Name has occurred. Line 27, position 14.

at System.Xml.XmlTextReaderImpl.Throw(Exception e)

at System.Xml.XmlTextReaderImpl.ParseQName(Boolean isQName, Int32 startOffset, Int32& colonPos)

at System.Xml.XmlTextReaderImpl.ThrowTagMismatch(NodeData startTag)

at System.Xml.XmlTextReaderImpl.ParseEndElement()

at System.Xml.XmlTextReaderImpl.ParseElementContent()

at System.Xml.XmlLoader.LoadNode(Boolean skipOverWhitespace)

at System.Xml.XmlLoader.LoadDocSequence(XmlDocument parentDoc)

at System.Xml.XmlDocument.Load(XmlReader reader)

at System.Xml.XmlDocument.LoadXml(String xml)

at Microsoft.SharePoint.Administration.SPAlternateUrlCollection.HasMissingUrl(String xml)

at Microsoft.SharePoint.Administration.SPContentDatabase.UpdateAlternateAccessMapping(SPAlternateUrlCollection collection)

at Microsoft.SharePoint.Administration.SPAlternateUrlCollection.UpdateAlternateAccessMappingInContent()

at Microsoft.SharePoint.Administration.SPAlternateUrlCollection.Update()

at Microsoft.SharePoint.Administration.SPAlternateUrlCollection.Add(SPAlternateUrl alternateUrl, Boolean fUpdate, Boolean throwIfExists)

at Microsoft.SharePoint.Administration.SPAdministrationWebApplication.Provision()

at Microsoft.SharePoint.Administration.SPWebServiceInstance.Provision()

at Microsoft.SharePoint.PostSetupConfiguration.CentralAdministrationSiteTask.ProvisionAdminVs()

at Microsoft.SharePoint.PostSetupConfiguration.CentralAdministrationSiteTask.Run()

at Microsoft.SharePoint.PostSetupConfiguration.TaskThread.ExecuteTask()

You will also see these errors in the Application event log with event IDs 100 and 104; they essentially say the same thing.

The app pool and the web site provisioned, and the CA even works from that server, but Central Admin doesn't show the server as holding that role. While doing some mucking around (that's a technical term, if you don't know it) I tried to change the AAMs to include the new server's name in the AAM list. Basically I got the same error, just wrapped in the "friendly" SharePoint error page.

I finally broke down and gave Microsoft a call, and after some digging it turned out there was an issue that was fixed in the re-release of SharePoint's August CU. That was fine, but what exactly was going on?

You can see in the KB what was going on: http://support.microsoft.com/kb/2000628

In a nutshell, the field in the Central Admin content database that stores the AAMs is too short: nvarchar(1023). The AAM XML in the database gets truncated after 1023 characters, which leaves you with malformed XML and hence the end-of-file exception.

This was causing a problem adding the CA because provisioning needs to read that field and update it. It will also be a problem for any web app whose AAM XML exceeds the 1023-character limit, and you can hit that pretty quickly if you add 4-5 AAMs for your sites. Here is an example of the XML, formatted to make it easier to read:

<AlternateDomains Count="4" Name="Some Web App">
  <AlternateDomain>
    <IncomingUrl>http://SomeIncomingURL#1.com</IncomingUrl>
    <UrlZone>Default</UrlZone>
    <MappedUrl>http://SomePublicURL.com</MappedUrl>
  </AlternateDomain>
  <AlternateDomain>
    <IncomingUrl>http://SomeIncomingURL#2.com</IncomingUrl>
    <UrlZone>Default</UrlZone>
    <MappedUrl>http://SomePublicURL.com</MappedUrl>
  </AlternateDomain>
  <AlternateDomain>
    <IncomingUrl>http://SomeOtherIncomingURL#3.com</IncomingUrl>
    <UrlZone>Default</UrlZone>
    <MappedUrl>http://SomePublicURL.com</MappedUrl>
  </AlternateDomain>
  <AlternateDomain>
    <IncomingUrl>http://SomeOtherIncomingURL#4.com</IncomingUrl>
    <UrlZone>Default</UrlZone>
    <MappedUrl>http://SomePublicURL.com</MappedUrl>
  </AlternateDomain>
</AlternateDomains>

That was about 870 characters, give or take. You can see how easily the 1023 limit can be reached.
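
If you want to confirm you are actually hitting the truncation before you call support, a read-only check against the Central Admin content database will tell you. This is just a sketch: the SQL instance and database names are made up, and I am assuming the XML sits in a column called Value in the DatabaseInformation table, so verify the column name against your own schema first.

# Read-only check: anything sitting right at 1023 characters has almost certainly been truncated.
# The server and database names below are placeholders.
$connStr = "Server=SQLSERVER01;Database=SharePoint_AdminContent;Integrated Security=SSPI"
$conn = New-Object System.Data.SqlClient.SqlConnection($connStr)
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT Name, LEN(Value) AS ValueLength FROM DatabaseInformation WHERE Name = 'AlternateAccessMappingXml'"
$reader = $cmd.ExecuteReader()
while ($reader.Read()) {
    Write-Host ("{0} is {1} characters long" -f $reader["Name"], $reader["ValueLength"])
}
$reader.Close()
$conn.Close()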

Disclaimer: DO NOT DO THIS ON YOUR PRODUCTION FARM WITHOUT GETTING APPROVAL FROM MICROSOFT SUPPORT OR YOU MIGHT BECOME UNSUPPORTED!

Now, the workaround (approval was given) was to run the following against the Central Admin content database (it had to be the CA content DB because adding the new Central Admin server was what was failing):

DELETE FROM DatabaseInformation WHERE Name = 'AlternateAccessMappingXml'

That was it. Deleting the row allowed the process to read the field (which was now empty) and continue without error. If you still have enough AAMs to push the XML past 1023 characters it will still be truncated, but the write operation succeeds and you can go about your business; the next time that field has to be read and parsed, you will hit the same problem again.

Something to note: if you are consolidating a lot of portals into one central SharePoint portal and you plan to point your old URLs at the new portal, you should consider updating the farm to at least the re-released August CU.


10/30/2009 Posted by | Administration, SharePoint, Upgrade | , , | Leave a comment

October CU is out

The October Cumulative Update for WSS and MOSS is out.

The detailed information for WSS is here: http://support.microsoft.com/kb/974989

The detailed information for MOSS is here: http://support.microsoft.com/kb/974988

Downloads can be found here:

WSS: http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=974989

MOSS: http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=974988

 

If you need to know how to slipstream the install, have a look at this TechNet article: http://technet.microsoft.com/en-us/library/cc261890.aspx

On a side note, I ran into a strange problem that is fixed in the August CU; I will tell you about it next week, as it caused some really odd issues.

10/30/2009 Posted by | SharePoint | | Leave a comment

New Technical Diagrams for SharePoint 2010

These were just released on TechNet: SharePoint 2010 technical diagrams. They will be a big help with understanding and planning. http://technet.microsoft.com/en-gb/library/cc263199(office.14).aspx

10/19/2009 Posted by | 2010, SharePoint | Leave a comment

Migrating SharePoint to 64 Bit — The Databases

This is the second post in my migration series, and it covers the first step in the process.

Because the impact on the users needs to be minimized, this is taking place on a Friday night... Argh! Anyway, that is how it goes in this business; it's gonna be a long night.

So there are a few ways to upgrade the databases: in place, keeping the same name; moving the DBs to a new server with the same name; or moving the DBs to another server with a different name. I will spare you the specific details; you can find specific information on TechNet here. I can see this decision being driven not by what might be easiest or best, but by what hardware you may or may not have. No matter which method is chosen, unless you load the server OS or are the DBA, there isn't a lot of involvement on the SharePoint side for this phase of the migration, at least the way it is being done here. For me, though, this is the riskiest and scariest part...

The process…
There isn't any hardware to create a parallel environment, or an extra database server to migrate to, so the process was to do the migration in place on the current servers.

Don't do anything in your environment until you TEST, TEST, TEST. When you think you have it right, TEST IT AGAIN! Remember, every environment is different, with its own quirks and issues.

Here is what was done:

  1. Take a full backup of the databases, started early enough for the backups to finish before you begin. If that means kicking them off a day or so ahead, then start then.
  2. Stop the farm.
  3. Run any transaction log backups on the databases.
  4. Detach the databases from SQL (this might not be necessary, but it makes me feel better).
  5. The DBs are on a SAN, so the SAN storage was detached from the W2K3 server.
  6. Reload the server with W2K8 and apply any post-installation configuration for the environment.
  7. Reattach the SAN to the server. W2K8 should see the disks without a problem (see notes).
  8. Load SQL 2008.
  9. Reattach the DBs to SQL Server.
  10. Run a consistency check (DBCC CHECKDB) on the DBs; see the sketch after this list.
  11. Restart the SharePoint farm and TEST IT!
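
For step 10, here is a rough PowerShell sketch of the consistency check. The instance name and database names are placeholders, and it assumes you can connect with integrated security from wherever you run it; DBCC CHECKDB will throw an error back at the script if it finds corruption.

# Run DBCC CHECKDB against each SharePoint database (names below are placeholders).
$instance = "SQLSERVER01"
$databases = "SharePoint_Config", "SharePoint_AdminContent", "WSS_Content_Portal"

$conn = New-Object System.Data.SqlClient.SqlConnection("Server=$instance;Integrated Security=SSPI")
$conn.Open()
foreach ($db in $databases) {
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = "DBCC CHECKDB ([$db]) WITH NO_INFOMSGS"
    $cmd.CommandTimeout = 0   # CHECKDB on a big content DB can run for a long time
    [void]$cmd.ExecuteNonQuery()
    Write-Host "CHECKDB completed without errors for $db"
}
$conn.Close()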

Some things to note:

  • You will want to record the SAN LUNs, the sizes of the volumes, and their associated drive letters on the Windows 2003 server before restaging the server.
  • When you reattach the SAN to the server running W2K8, the disks might show up as foreign disks; you will need to import them.
  • Check permissions on the reattached disks; local users will be orphaned, but domain users and groups should still be there and OK.

Here are some commands to stop the services on the SharePoint servers. Pretty basic, and you probably know this already, but here they are nonetheless.
To stop:

@echo off
@echo Stopping services...
iisreset /stop /noforce
net stop "Windows SharePoint Services Timer"
net stop "Windows SharePoint Services Administration"
net stop "Office SharePoint Server Search"
net stop "Windows SharePoint Services Search"
net stop "Windows SharePoint Services Tracing"

To start:

@echo off
@echo Starting services...
net start "Windows SharePoint Services Tracing"
net start "Windows SharePoint Services Search"
net start "Office SharePoint Server Search"
net start "Windows SharePoint Services Administration"
net start "Windows SharePoint Services Timer"
iisreset /start

This should get you to 64-bit SQL Server. Remember, it isn't the only way to migrate to 64-bit SQL Server, and you need to TEST IT!

10/17/2009 Posted by | SharePoint | , , | Leave a comment

Migrating SharePoint to 64 bit — The Beginning

Well, with the impending upgrade to SharePoint 2010 and 32-bit no longer being supported going forward, everyone on 32-bit will have to migrate their servers to 64-bit before they can upgrade. If I had my wish I would stand up a parallel environment, including a mirrored SQL Server, and migrate with very little downtime. In reality that isn't going to happen, so there has to be a more creative way to migrate. There are several ways to go about it, but this is how I want to proceed while still trying to minimize downtime... (wish me luck)

The migration needs to happen in a series of steps, and this is what I want to do:

  1. Migrate the SQL Servers first
  2. Migrate the application tier
  3. Migrate the WFEs

There are a lot of things to consider:

  • How many SharePoint farms need to be migrated?
  • Is there a parent-child relationship with the SSP?
  • Content Deployment?
  • Indexing and Query?
  • How much data is in your SharePoint environment?
  • Can you have any downtime? How much?

There are more, but you get the idea. It isn't necessarily an easy task if your users demand little downtime and you have a large farm with a lot of data. Don't get me wrong, there are risks every time you make this kind of change in any farm.

I will post some of the issues I run into as the migration goes, along with what was done to mitigate them, the specific procedures used, or the workarounds.

10/15/2009 Posted by | SharePoint | , , | Leave a comment

MOSS Object Caching

First, I want to thank Sean McDonough for his blog post MOSS Object Cache Memory Tuning is not an Intuitive Process; it helped a great deal with a stability problem we were having with a publishing portal.

So here was the problem: the company has an internal portal (yes, it is on a 32-bit machine) that gets around 65-70K unique visitors per day. The traffic hadn't been that high until the last couple of weeks, when a DNS change pointed the old portal at the new SharePoint site. Soon after the change, the site just was not stable at all. We were seeing a lot of app pool resets, slow responses, and a lot of out-of-memory errors.

After doing some searching we found that the object cache setting was set to 1GB (the default is 100MB)! You can read about SharePoint caching here on TechNet. The thing is, the object cache sits in the worker process, and on a 32-bit system there is only 2GB of application memory space. With that added pressure on the application memory, it just could not cope.

Using Sean's blog and his recommendation of watching the SharePoint Publishing Cache/Total number of cache compactions counter, we reduced the cache to 300MB (it had already been dropped to 500MB as an arbitrary number the application team would agree to), watched it for a day, and saw the compactions increase to 2-3 per day; at 500MB the counter had stayed at 0. We then increased the memory to 350MB, watched it for another day, and continued until the optimal setting for the object cache was found.
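
If you want to keep an eye on that counter without babysitting Performance Monitor, something like this works. It is only a sketch: it needs PowerShell v2 for Get-Counter, and the counter path is my guess at how the SharePoint Publishing Cache category maps to a counter path, so confirm the exact name in perfmon first.

# Sample the compaction counter every 5 minutes for an hour and print the values.
# The counter path is an assumption; verify it in Performance Monitor.
Get-Counter -Counter '\SharePoint Publishing Cache(*)\Total number of cache compactions' -SampleInterval 300 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }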

Imagine what would happen if you had a single web application for all of your publishing sites: every site collection admin could change this setting. Now, just because the setting is at something like 500MB doesn't mean that amount of memory is automatically reserved; it just means the cache will grow to that amount if needed. But on a 32-bit machine it would have a great impact on all of the sites in the web app. This got me thinking: how can I control this? I could not find a global setting in Central Admin, nor could I find an out-of-the-box STSADM command to run. PowerShell!

I created a PowerShell script that I could use to loop through the site collections on a web app and set the cache back to the default (or whatever you want it to be). Here is what I did.

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Publishing")

$site = New-Object Microsoft.SharePoint.SPSite("http://mosssite")
$webapp = $site.WebApplication

foreach ($sites in $webapp.Sites) {
    # Only touch site collections where the cache has been raised above the 100MB default
    $cacheSettings = New-Object Microsoft.SharePoint.Publishing.SiteCacheSettingsWriter($sites.Url)
    if ($cacheSettings.ObjectCacheSize -gt 100) {
        $cacheSettings.ObjectCacheSize = 100
        $cacheSettings.Update()
        $sites.Url   # print the URL of each site collection that was changed
    }
    $sites.Dispose()   # dispose each SPSite handed back by the enumeration
}
$site.Dispose()

Note: don't forget to dispose of your SPSite objects!

09/18/2009 Posted by | Administration, Powershell, SharePoint | , | 1 Comment

IIS 7 Upload limit

IIS 7 has a default upload limit of 30,000,000 bytes, which is about 28.61 MB. Even when the maximum upload size in SharePoint is increased to, say, 100MB, you still cannot upload files larger than about 28MB. How do you fix it? You can either install the IIS Administration Pack or modify the following file: C:\Windows\System32\inetsrv\config\applicationHost.config

You would add the following to the <requestFiltering> section:

<requestLimits maxAllowedContentLength="[length in bytes]" />

So for 100MB you would enter 104857600 (100 x 1024 x 1024 bytes). It would be wise to make this a little larger than what you set in SharePoint, for some wiggle room. This should allow you to upload files larger than the default 28.61MB.
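
If you would rather script the change than hand-edit the file, here is a rough PowerShell sketch. It assumes the default applicationHost.config layout with a global <requestFiltering> section and needs to run elevated; it takes a copy of the file first, and you can swap in whatever byte value you need.

# Back up applicationHost.config, then add or update the requestLimits element.
$configPath = Join-Path $env:windir 'System32\inetsrv\config\applicationHost.config'
Copy-Item $configPath "$configPath.bak"

[xml]$config = Get-Content $configPath
$filtering = $config.SelectSingleNode('//system.webServer/security/requestFiltering')

$limits = $filtering.SelectSingleNode('requestLimits')
if ($limits -eq $null) {
    $limits = $config.CreateElement('requestLimits')
    [void]$filtering.AppendChild($limits)
}
$limits.SetAttribute('maxAllowedContentLength', '104857600')   # 100MB in bytes

$config.Save($configPath)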

08/21/2009 Posted by | IIS | Leave a comment

Another Content Deployment Problem

Wow, just when I thought we had our content deployment jobs fixed...

I worked another content deployment problem, and this time I have the error!

Group cannot be found.
at Microsoft.SharePoint.SPGroup.InitMember()
at Microsoft.SharePoint.SPGroup..ctor(SPWeb web, ISecurableObject scope, SPUser user, String GroupName, Object[,] arrGroupsData, UInt32 index, Int32 iByParamId, SPGroupCollectionType groupCollectionType)
at Microsoft.SharePoint.SPGroupCollection.get_Item(String name)
at Microsoft.SharePoint.Deployment.RoleAssignmentXImport.UpdateAssignment(ImportStreamingContext context, SPWeb web, ISecurableObject obj, Boolean bAdd, String strUser, String strGroup, String strRole)
at Microsoft.SharePoint.Deployment.RoleAssignmentXImport.ProcessElement(ImportStreamingContext context, XmlReader xr, SqlSession session)
at Microsoft.SharePoint.Deployment.SqlImport.Run()
at Microsoft.SharePoint.Deployment.SecurityObjectSerializer.SetObjectData(Object obj, SerializationInfo info, StreamingContext context, ISurrogateSelector selector)
at Microsoft.SharePoint.Deployment.XmlFormatter.ParseObjectDirect(Object objParent, Type objectType)
at Microsoft.SharePoint.Deployment.XmlFormatter.DeserializeObject(Type objectType, Boolean isChildObject, DeploymentObject envelope)
at Microsoft.SharePoint.Deployment.XmlFormatter.Deserialize(Stream serializationStream)
at Microsoft.SharePoint.Deployment.ObjectSerializer.Deserialize(Stream serializationStream)
at Microsoft.SharePoint.Deployment.ImportObjectManager.ProcessObject(XmlReader xmlReader)
at Microsoft.SharePoint.Deployment.SPImport.DeserializeObjects()
at Microsoft.SharePoint.Deployment.SPImport.Run()

Huh…What Group?

Let me give you a little background on the environment. Content is being deployed to an externally facing server and consists of news and such, plus an area that sits behind an FBA login. The content deployment path is set up as follows: Deploy User Names is not checked, and the Security Information option is set to Role Definitions Only (not groups, right?). The main site is anonymous, and the sites behind FBA have inheritance broken (more on that later).

On the source, a document library had its inheritance broken and all of the groups removed; they didn't want any publishers putting anything in there. (This is what was causing the job to error; more on that in a minute.)

Still, what group?

The first place to start is the content deployment logs. Once the error happens, it will keep happening until you find that group.

You probably don't have ULS logging turned up for Content Deployment, so you will need to do that. Once you have it turned up, run the job that is failing. It will error; check the logs. You should be able to find the error, and a line or two above it you should be able to find the site that is causing it. It will say something about not being able to get the group information.
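
Once the logging is turned up, you can let PowerShell dig through the ULS logs instead of scrolling by hand. A rough sketch, assuming the default 12-hive log location and PowerShell v2 for the -Context switch; adjust the path and the search string for your farm and your error.

# Search the most recent ULS logs for the group error, with a few lines of leading
# context so the offending site URL is visible.
$logDir = 'C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\LOGS'
Get-ChildItem $logDir -Filter *.log |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 3 |
    Select-String -Pattern 'Group cannot be found' -Context 3,0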

Let's talk about how inheritance works with regard to content deployment (remember, I am not pushing security, just Role Definitions, not groups). When inheritance is broken, by default all of the group/security information remains on the site; we have all seen this. When the job ran, I expected to see inheritance broken on the destination with all of the source groups still in place. Not the case: inheritance was broken and all of the security information was removed. I need to check whether some hotfix or SP2 changes this behavior, and I will update this post.

Next, look at the site that is having the problem and compare the source and destination group information. Wait, the job removed all of the groups from the destination, and the source groups were removed already... OK, they are the same! Yes, they look the same, but remember we have another variable: the changes that content deployment is trying to push to the destination.

Still haven’t figured out what group?

To figure out which group, I re-enabled inheritance on both the source and the destination and started comparing the two.

HA! In this case there were three SharePoint groups on the source that were not on the destination, so all I did was add those groups on the destination at the same level (the site collection, in my case) and ran the deployment job. Success!

Next, inheritance was broken again on the source side and I ran the job: success! Lastly, I deleted the groups on the source and ran the job: SUCCESS!

That fixed it….

I will research why some of the groups were there and not others. Remember, Role Definitions Only is not supposed to push the groups, but some are pushed and some are not. I will update with what I find.

Recap:

  1. Turn up logging for Content Deployment
  2. Run the failed job
  3. Check the ULS logs for the error and determine what site is having the problem
  4. Start looking for differences in the groups
  5. Add the groups that are on the source but not on the destination (see the sketch after this list)
  6. Run the job and hope you have found everything.
  7. If not: wash, rinse, repeat...
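
For step 5, here is a small sketch of recreating a missing group on the destination with PowerShell rather than clicking through the UI. The URL, group name, and owner account are all made up; run it on the destination farm at the level where the group needs to exist.

# Create the missing SharePoint group at the root web of the destination site collection.
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") | Out-Null
$site = New-Object Microsoft.SharePoint.SPSite("http://destinationsite")
$web = $site.RootWeb
$owner = $web.EnsureUser("DOMAIN\svc_deploy")
$web.SiteGroups.Add("Portal Publishers", $owner, $null, "Recreated so content deployment can map its role assignments")
$site.Dispose()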

07/18/2009 Posted by | Content Deployment | , | 2 Comments

Duplicate Content Type Error during Content Deployment Job

Had a problem the other day with content deployment. Everything was running fine, then the deployment jobs started failing with an error that started out with "Duplicate content type 'Page'" (I wish I had the entire message, but the logs have rolled over since we fixed it). In this case the content type was 'Page', but it could be any of your content types.

Huh?

It turns out that the content deployment jobs don't delete content types from the document library on the destination when they have been removed from the source. In practice, if you remove them from the source site and never use them again, having them on the destination isn't really a big deal. However, if you choose to put the content type back into the document library, you have a problem: the content deployment job attempts to push the added content type, and because it already exists on the destination it is a duplicate, so the job fails.

The error message doesn't point you to the right document library, nor does it even point you to the right site. The best thing to do is turn on auditing of content types in the site collection audit settings on the source.

If it happens again, you will be able to see the site it happened in. In this case there are a good 12 or so sites being deployed, and by chance auditing was already turned on, so we could see where the problem was happening (otherwise, you can remove the sites from the job and add them back one by one until the problem occurs again).

So how do you work around it? (No, not 64-bit.)

All you need to do is delete the content type from the document library on the destination and run your content deployment job again; it should be successful.
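
If you prefer to do the cleanup from the object model instead of the library settings page, a sketch like this works. The site URL, web, library name, and content type name are hypothetical; run it on the destination, and it will throw if the content type is still in use (see the note below).

# Remove the duplicated content type from the destination document library so the
# next deployment job can push it cleanly.
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") | Out-Null
$site = New-Object Microsoft.SharePoint.SPSite("http://destinationsite")
$web = $site.OpenWeb("/news")
$list = $web.Lists["Pages"]
$ct = $list.ContentTypes["Page"]
if ($ct -ne $null) { $list.ContentTypes.Delete($ct.Id) }
$web.Dispose()
$site.Dispose()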

Note: you will not be able to delete the content type if it is still being referenced by items in the doc lib.

Prevention is another story. Educating the user base and limiting the number of users who can perform this function is about the only way to prevent it, but now that you know how to fix it, you should be able to minimize the downtime of your content deployments.

07/17/2009 Posted by | Content Deployment | , | 1 Comment

Obligatory Welcome

Here is the obligatory welcome post...

So, welcome to my blog…

07/17/2009 Posted by | Uncategorized | Leave a comment