Friday, September 14, 2012

SharePoint 2007, crawling the index server

We decided that we needed to crawl the index server rather than the web servers in our SharePoint environment.  We are also using an F5 to load balance the web servers.  There are notes and articles that describe how to point the crawl at a specific web front end server, but none that specifically use the index server.  This method, by the way, can also be done on an additional server that is neither the index server nor part of the F5 pool, which would be perfect for environments that can afford the extra server.

So how do we do this?

We combined the alternate access mappings for the F5 load balancing with a server name mapping for the index server crawl.  We also set the Configure Office SharePoint Server Search service setting to not use a dedicated web front end computer for crawling, which keeps the hosts file from being overwritten.

The normal approach would be to set that same search service setting and then point entries in the hosts file at the crawl target.  In our case, however, the F5 overruled the entries we put in the hosts file, and the crawl would get lost on the index server.  This is where we used the server name mapping.

By setting the server name mapping to point the crawl at the index server and the display URL to reflect the F5 alias, we were able to force the crawl onto the index server, while users would hit the F5 alias and be load balanced in their search.

We found ourselves in a situation where, if we tried to use a dedicated index server that was not part of the F5 alias, the index server would get lost and the crawl would fail.  It would report an error indicating it could not see the server, even after we set the IPs in the hosts file to point at the index crawler.


So here is how we made it work.

For the alternate access mappings we created an internal URL pointing to http://F5alias.com with the zone set to Default and a public URL pointing to https://f5alias.com .

Then we created an additional alternate access mapping with the internal URL pointing to http://indexserver.com , the zone set to Custom, and a public URL of http://indexserver.com .

Then we created a server name mapping with an address in index of http://indexserver.com and an address for display in search results of https://F5alias.com .

When the crawl is processed it builds the index on the index server and displays the F5 load-balanced alias in the results.  The user hits the F5 alias and is directed to one of the servers behind the F5.
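To summarize the configuration in one place (the host names here are stand-ins for our actual aliases):

```
Alternate Access Mappings
  Internal URL: http://F5alias.com       Zone: Default   Public URL: https://f5alias.com
  Internal URL: http://indexserver.com   Zone: Custom    Public URL: http://indexserver.com

Server Name Mappings (Search administration)
  Address in index:                       http://indexserver.com
  Address for display in search results:  https://F5alias.com
```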




Friday, June 8, 2012

Locks, locks, who's got the locks

I've been doing this IT work for quite some time now.  In the past we worked with DEC equipment, Local Area VAX Clusters, and Rdb.  Now we work mostly with SQL Server and some Oracle, but locking and blocking locks can still be an issue.

On the older LAVC we could have a lock on one server that blocked a lock on another server, so we had to be sure to look in multiple places for locking and blocking.  On RAC, linked servers, linked instances, and the like, we need to be aware that blocking can be occurring on servers other than the one we are currently working on or looking at.  So we need to stay aware of what could be impacting the application.

How do we know that a remote lock might be causing us trouble?  First we want to understand what the application is doing: is it making a remote call?  If it is not, we focus on our current server and look for resource issues there.  If it does make remote calls, we still look at our current server and its resources first, but if those appear to be okay and our process is not making any progress, we want to look at the resources on the remote server and see how things look there.  We could very well have a remote blocking lock that needs to be addressed.  Other resource issues can always come into play as well.
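On SQL Server 2005 and later, a quick place to start on each server (local first, then the remote one) is the request DMVs.  This is just a minimal sketch of a blocking check, not a full analysis:

```sql
-- Who is blocked, and by whom, on this instance
SELECT r.session_id,
       r.blocking_session_id,   -- 0 means not blocked
       r.wait_type,
       r.wait_time,             -- milliseconds spent waiting
       r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;
```

Run the same check on the remote server; if the blocker there is the session servicing your linked-server call, you have found your remote block.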

Good luck, keep DBA'ing and looking for resources.

Mark

Friday, May 25, 2012

Gathering Database sizing information

So I have been capturing my growth data for quite some time using linked servers and SQL code.  It has been working and giving me the information that I needed, so I was happy.

The server that carried my database of information needed to be retired.  Presto chango, no problem: I moved my data to a new server with a newer version of SQL Server and I'm happy.  However, I really did not want to keep a lot of linked server connections around just so I could gather my data into my new server.  In comes PowerShell. Da-ta-da!!

I have been working with PowerShell on some Active Directory work and so have gained some knowledge of it.  I know I can be doing more with it on the database side of life and decided this was a good opportunity.  So now I gather my data with a couple of PowerShell scripts.

I use one to gather information about file sizes and another to gather the physical disk free space.
I looked at what some other folks are doing with PowerShell, compared that with what I was doing with SQL Server, and merged those ideas.  Actually, with the snap-ins and Invoke-Sqlcmd it was not a very big switch.  I used Tim Radney's concept of passing in the list of servers and stuck in my table data.  Ed Wilson and Dale Kelley look to be using a file for this input, and I am going to look into that.

For the server freespace:
Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100

# Servers to collect free-space data from
$listOfServers = "a", "b", "c", "d", "e"

# xp_fixeddrives returns each drive letter and its free megabytes;
# the leading null lines up with the destination table's first column
$query = "
create table #FreeSpace (partition char(1), mb_free int)
insert into #FreeSpace exec xp_fixeddrives
select null, @@servername, partition, mb_free from #FreeSpace"

$destinationServerName = "servername"
$destinationDatabaseName = "Databasename"

foreach ($server in $listOfServers)
{
    $results = Invoke-Sqlcmd -ServerInstance $server -Query $query
    Write-Host "Arg: $results"   # debug echo of the collected rows

    $connectionString = "Data Source=$destinationServerName;Integrated Security=true;Initial Catalog=$destinationDatabaseName;"
    $bulkCopy = New-Object System.Data.SqlClient.SqlBulkCopy $connectionString
    $bulkCopy.DestinationTableName = "DatabaseTableName"
    $bulkCopy.WriteToServer($results)
}
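SqlBulkCopy maps columns by position when no mappings are given, so the destination table needs columns that line up with the SELECT.  Something like this would fit (the table and column names here are illustrative; the identity column absorbs the leading NULL):

```sql
-- Illustrative destination table for the free-space collector
CREATE TABLE dbo.DatabaseTableName (
    id          int IDENTITY(1,1),   -- absorbs the NULL in the first select position
    server_name sysname NOT NULL,
    drive       char(1) NOT NULL,
    mb_free     int     NOT NULL
);
```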

For the file size data:
Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100

# Servers to collect file-size data from
$listOfServers = "a", "b", "c", "d", "e"

# sp_MSforeachdb runs the query in every database; sysfiles.size is in
# 8 KB pages, so size*8 is KB and size/128.000 is MB
$query = "exec sp_MSforeachdb 'use [?]
select null,
@@servername AS ServerName
,DB_NAME() AS DatabaseName
,name
,filename
,(size*8) AS Size_KB
,[File_Size_MB] = convert(decimal(12,2),round([size]/128.000,2))
FROM dbo.sysfiles;'"

$destinationServerName = "servername"
$destinationDatabaseName = "Databasename"

foreach ($server in $listOfServers)
{
    $results = Invoke-Sqlcmd -ServerInstance $server -Query $query
    Write-Host "Arg: $results"   # debug echo of the collected rows

    $connectionString = "Data Source=$destinationServerName;Integrated Security=true;Initial Catalog=$destinationDatabaseName;"
    $bulkCopy = New-Object System.Data.SqlClient.SqlBulkCopy $connectionString
    $bulkCopy.DestinationTableName = "database_table_name"
    $bulkCopy.WriteToServer($results)
}
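Again the destination table just has to match the SELECT positionally.  A sketch, with illustrative names:

```sql
-- Illustrative destination table for the file-size collector
CREATE TABLE dbo.database_table_name (
    id            int IDENTITY(1,1),   -- absorbs the leading NULL
    server_name   sysname       NOT NULL,
    database_name sysname       NOT NULL,
    logical_name  sysname       NOT NULL,
    physical_name nvarchar(260) NOT NULL,
    size_kb       int           NOT NULL,
    file_size_mb  decimal(12,2) NOT NULL
);
```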

I run it as a scheduled task and gather the data on a weekly basis.  For me it is working great.

Happy dba'ing and remember:

Help I'm a DBA...
there really is a lot of resource help out there.

Things a DBA should do

So I have been looking at what it is that a DBA needs to do on a regular basis and when each task needs to be performed. Here is what I have come up with so far.

• Daily Tasks
    o Review emails for failure notifications
    o Review emails for user issues/tickets
    o Confirm that data is available
    o Review daily morning logs for
        - Login issues
        - Errors
    o Confirm backups
        - Full
        - Differential
        - Transaction logs
    o Confirm scheduled jobs
    o Review mirroring
    o Review log shipping
    o Review snapshots
    o Delete snapshots
    o Review Windows event logs
    o Review audit/trace logs if set up
    o Review SQL Server log files
    o Review SQL Agent log files
    o Review cluster/mirror logs
    o Troubleshoot issues
• Weekly Tasks
    o Review articles, blogs
    o Review Monday logs
        - Space
        - DBCC CHECKDB
        - Rebuild indexes
        - Reorganize indexes
        - Update statistics
        - Gather security audit data into an audit table
        - Review security audit data
    o Confirm sp_configure settings document
    o Review code for weekly rollout
    o Confirm email functionality
    o Weekly code rollout
    o Review new objects
    o Rollout new objects
    o Review scripts
    o Rollout scripts
    o Create/review metadata baseline
    o Review change history
• Monthly Tasks
    o Roll the alert logs
    o Microsoft security Patch Tuesday (second Tuesday of the month)
    o Review and test security patches
    o Rollout security patches if required
    o Build monitor baselines
        - CPU
        - Memory
        - Network
    o Compare previous baselines to current for potential performance issues
    o Test restores/recoveries
    o Test cluster failovers
    o Review products coming up, download standard patches and beta releases
    o Perform server audits
        - Confirm any unknown servers with the license manager or asset manager (John Humphries)
        - Confirm what is on which server: databases, applications
        - Confirm the database edition, version, and patch level
        - Confirm the OS edition, version, and patch level
        - User/group permissions
• Quarterly Tasks
    o Review service packs
    o Test service packs
    o Rollout service packs
    o Review standard patches
    o Test standard patches if a patch is desired
    o Rollout patches if a patch is desired
• Semi-Annual Tasks – starting June 1 & January 1
    o Review end of product support
    o Pre-license review (June)
    o Post-license review (January)
    o Pre-review DR docs (June)
    o Post-review DR docs (January)
    o Prep for upgrade to new version
        - June: develop plan and methodology
        - January: review upgrade rollout
• Annual Tasks – starting October 1
    o Formal license review
    o DR test
    o Upgrade to new versions
• On-Demand Tasks
    o Overlay/refresh data
    o Installations
    o Data modeling tasks
    o Server request forms
    o Backup request forms




Disk space, DBCC SHRINKFILE can be your friend

So I just had a friend who had a disk space problem on one of her SQL Server test environments. Basically the users were doing more work and training, and using more disk space than expected. Not all databases are created with multiple files to spread out the data; as a matter of fact, more SQL Server databases are set up as simple single-file databases than not.

For temporary relief, a new file was added to the database on an additional drive while they considered what they wanted to do. The following was run to allow the users to continue their work.

ALTER DATABASE [test]
ADD FILE ( NAME = N'test_data1',
           FILENAME = N'N:\MSSQL10.MSSQLSERVER\MSSQL\DATA\test_data1.ndf',
           SIZE = 307200KB,
           FILEGROWTH = 102400KB )
TO FILEGROUP [PRIMARY];

This added a new 300 MB file on her additional drive with autogrow set to 100 MB.

Did they want to add disk, or limit what the user community was doing on the system? The option to add disk space was decided upon, and additional space was added by the systems team to her other drive.

Now she wanted to get rid of the file that had been added to the additional drive and get back to a very simple database file configuration. So what could she do?

DAT-TA-DA!

DBCC SHRINKFILE comes to the rescue. It has an option called EMPTYFILE, which moves all of the data out of the specified file and into the other files in the same filegroup.

By running SHRINKFILE with the EMPTYFILE option she would be able to remove the additional file that had been created.
So I recommended that she do the following:

DBCC SHRINKFILE (test_data1, EMPTYFILE);
GO
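Before dropping the file, you can confirm it really is empty. A quick check, assuming SQL Server 2005 or later:

```sql
-- used_mb should be 0 (or nearly so) for test_data1 after EMPTYFILE
SELECT name,
       size / 128 AS size_mb,                            -- size is in 8 KB pages
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb  -- pages in use
FROM sys.database_files;
```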

Then, to drop the data file from the database:

ALTER DATABASE test
REMOVE FILE test_data1;
GO

This moved the data back to her original file and allowed her to get rid of the additional file that was added. Now she can go back and consider how she really wants to handle her file and filegroup structure, but that is a different conversation.

Happy dba'ing and remember:

Help I'm a DBA...
there really is a lot of resource help out there.