Showing posts with label setup. Show all posts

Wednesday, March 28, 2012

Help with SQL 2000 RAID setup...

Hello,
I have a SQL 2000 server and need to make a final decision on my RAID setup
for best performance, cost, and drive usage.
My database application is of a front office/back office type, which means
accounting, payroll, etc. are involved as well. Most activity from the front
end is reads, with occasional writes. The back office will generate a lot of
writes in a short period of time during payroll processing. Only 50
users use this application.
I have a single-processor Dell server with a 1x6 backplane with 6 hot-swap
bays. I currently have three 36 GB drives to use.
I have the option to split the backplane. My RAID choices are as follows:
1x6 backplane (I think all I/Os go through 1 channel/port/cable on the
controller to all 6 drives in this 1x6 backplane setup):
2 RAID 5 arrays (3 drives each)
OR
RAID 1 (2 HDs) and a RAID 10 (4 HDs) array (all through a single
port/channel/cable)
If I split the backplane to a 2x3 backplane (uses 2 channels/ports/cables on
the controller):
2 RAID 5 arrays (3 HDs each)
OR
1 RAID 1 array & 1 RAID 5 array on separate channels/ports (this leaves 1
bay open and disables my ability to use RAID 10)
I have the OS, SQL executables, SQL databases, transaction logs, tempdb,
and backups to deal with.
I thought I read somewhere that if the transaction log files are not on a
RAID 1 volume by themselves, it defeats the purpose. But I'm unsure whether
they meant separate physical drives, or just separate volumes or partitions
on the same drives.
So my issues are I/Os from controller channels/ports, separate physical
RAID volumes, and then partitions (or volumes) on the RAID drives.
Based on this information, can anyone reflect from experience which of my
choices above would work out the best for my SQL application?
Thanks much,
Mark|||Hi
With the number of drives you have available, you are going to have to
compromise in some way. Ideally you would separate the OS, pagefile, tempdb,
system databases, user database data files, and user database log files onto
separate sets of spindles, or possibly multiple sets of spindles if you have
multiple files for each. Where you compromise may depend on the
application usage and things like the amount of memory installed/used. I
would probably go for the RAID 1/RAID 10 configuration with data and log
files together, but monitor how the system copes and see about moving things
around to get the best you can.
You may want to look at
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops6.mspx
and
http://www.microsoft.com/technet/prodtechnol/sql/2005/physdbstor.mspx
John
"Mrpush" wrote:
> [snip]
|||John,
Thanks for the info.
Just to confirm, you believe I should go with the RAID 1, with the OS and SQL
executables on it.
Then I'd have a RAID 10 with all SQL database and log files (master, tempdb, &
user databases and log files together).
Where best do my backup files go? The RAID 1 or 10 array? (They will be
done nightly, when nearly no DBs are being used.)
Also, are there any free utilities I could use to load test the system and
track the I/O loads with the different RAID configs?
Thanks much,
Mark
"John Bell" wrote:
> [snip]
|||Hi Mark
You could back up to the RAID 1 discs or possibly a network (UNC) drive. You
will also have the option of moving the log files from the RAID 10 discs to
the RAID 1 discs and benchmarking the performance. If you have significant
use of the page file, you may want to see about adding more memory.
John
"Mrpush" wrote:
> [snip]
|||Mrpush wrote:
> [snip]
Hi Mark
I'd rather go for a solution where you have your log and data files
separated, both from a performance and a disaster recovery point
of view. If you have both log and data files on the same array/disks,
then if this array fails and you lose your files, you've lost everything.
From a performance point of view, it will in most cases give better
performance to split log and data files onto separate arrays/drives, and
best of all onto different RAID controllers as well. For disaster recovery
reasons, I'd also try to put my backup files on an array that doesn't
contain any data or log files.
With your limited server configuration, I know it's limited how much you
can do, but I think I'd go for a RAID 1 for your log files and a RAID 5
for your data files. If you don't do log backups though (if your
databases are in SIMPLE recovery), you could from a disaster recovery
point of view put both log and data files on the same array and then put
your backups on a separate array. If your log/data array then fails and
you lose your files, it doesn't really matter that you lose both log
and data files, since your only option will be to restore the last full
backup anyway.
Regards
Steen Schlüter Persson
Databaseadministrator / Systemadministrator|||Hi
RAID10 will have better performance on writes and better fault tolerance.
John
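For intuition on the write-performance difference, the usual back-of-the-envelope model is the RAID write penalty: a small write costs roughly 4 back-end I/Os on RAID 5 (read data, read parity, write both) versus 2 on RAID 1/10. A rough sketch, where the per-disk IOPS figure is an assumption for illustration, not a measurement:

```python
# Back-of-the-envelope RAID throughput estimate, illustrating why RAID 10
# beats RAID 5 on writes: a RAID 5 small write costs ~4 physical I/Os
# (read data, read parity, write data, write parity) while a RAID 10
# write costs 2 (one per mirror side).

def effective_iops(disks: int, disk_iops: int, write_penalty: int,
                   read_fraction: float) -> float:
    """Approximate host-visible IOPS for a mixed read/write workload."""
    raw = disks * disk_iops  # total back-end I/Os per second
    write_fraction = 1.0 - read_fraction
    # Each host read costs 1 back-end I/O; each host write costs `write_penalty`.
    return raw / (read_fraction + write_fraction * write_penalty)

if __name__ == "__main__":
    # 4 drives at an assumed ~120 IOPS each, with the OP's ~75% read mix
    raid10 = effective_iops(4, 120, write_penalty=2, read_fraction=0.75)
    raid5 = effective_iops(4, 120, write_penalty=4, read_fraction=0.75)
    print(f"RAID 10: ~{raid10:.0f} IOPS, RAID 5: ~{raid5:.0f} IOPS")
```

At a 75% read mix the gap narrows, which is why the read-heavy front office tilts the decision less than payroll's write bursts would.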
"Steen Persson (DK)" wrote:
> [snip]
|||John Bell wrote:
> Hi
> RAID10 will have better performance on writes and better fault tolerance.
> John
>
Agreed, but with only 6 drives available it might be difficult to use 4
of them for just one RAID 10 array. Since the OP indicates that the
workload is mostly reads, it might be better to create a RAID 5
array for the database files.
Regards
Steen Schlüter Persson
Databaseadministrator / Systemadministrator|||But then... if it is mainly
reads, the location of the log file is less of a factor!!! :)
Unfortunately there is no absolute answer to this question, and too many
variables are still outstanding; Mark (the OP) would need to do some
benchmarking and load testing to find what is best for himself!!
John
"Steen Persson (DK)" wrote:
> [snip]
|||Guys,
Thanks for all the input.
In actuality, my take on the reads is really an assumption. I have never
benchmarked it, as it's a new system, but the general nature of the app is
more of just pulling data to view in 75% of the cases. The back end will
have many more writes, however, during payroll processing.
If I do the RAID 1/10, I'd have to purchase ~$1000 worth of drives. This
would give me 36 GB for the RAID 1 and 146 GB for the RAID 10 array.
(In comparison, the RAID 1/5 would cost ~$800 and give me an extra 70 GB of
space.)
The DBs are not huge - approx 2 GB - and will probably grow only ~0.5 GB per
year.
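The capacity figures above follow from the standard usable-space formulas; a quick sketch to check them (the 36 GB and 73 GB drive sizes are inferred from the capacities quoted, so treat them as assumptions):

```python
# Usable-capacity check for the layouts being priced: RAID 1 keeps one
# drive's worth, RAID 10 keeps half the total, RAID 5 loses one drive to
# parity.

def usable_gb(level: str, disks: int, drive_gb: int) -> int:
    if level == "RAID1":
        assert disks == 2
        return drive_gb
    if level == "RAID10":
        assert disks % 2 == 0
        return disks * drive_gb // 2
    if level == "RAID5":
        assert disks >= 3
        return (disks - 1) * drive_gb
    raise ValueError(level)

if __name__ == "__main__":
    # Option A: RAID 1 on 2x36 GB plus RAID 10 on 4x73 GB
    print(usable_gb("RAID1", 2, 36), usable_gb("RAID10", 4, 73))   # 36 146
    # Option B: RAID 5 on the same 4x73 GB drives -> ~73 GB more space
    print(usable_gb("RAID5", 4, 73) - usable_gb("RAID10", 4, 73))  # 73
```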
Question, are there some free/cheap tools I can use to benchmark or load
test the server with reads and writes?
Thanks much,
Mark
"John Bell" wrote:
> [snip]
|||Hi Mark
You can replay a trace using SQL Profiler; this will depend on the original
workload and may not really be the best way of stress testing the disc
subsystem. Similarly, if your applications are web based then you could
possibly use ACT http://tinyurl.com/lohup.
SQLIOStress is a free tool http://support.microsoft.com/kb/231619 that will
stress your disc subsystem, and there is also SQLIO http://tinyurl.com/fyt53.
Some of the other products you may want to look at are LoadRunner
http://www.mercury.com/us/products/performance-center/loadrunner/
or IBM Rational performance tester
http://www-306.ibm.com/software/awdtools/tester/performance/index.html or
Benchmark Factory from Quest
http://www.quest.com/benchmark_factory/default.aspx.
John
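Short of those tools, a first rough relative number can come from a crude timing script. A minimal sketch (file and block sizes are arbitrary choices, and this exercises the filesystem cache as much as the discs, so only compare runs on the same box against each other):

```python
# Crude sequential-write throughput probe: write a file in fixed-size
# blocks, fsync, and report MB/s. Not a substitute for SQLIOStress/SQLIO;
# useful only for comparing one RAID layout against another.
import os
import tempfile
import time

def sequential_write_mbps(path: str, total_mb: int = 8, block_kb: int = 64) -> float:
    block = b"\0" * (block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data to the device, not just the cache
    elapsed = time.perf_counter() - start
    return total_mb / elapsed

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        target = tmp.name
    try:
        print(f"~{sequential_write_mbps(target):.1f} MB/s sequential write")
    finally:
        os.remove(target)
```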
"Mrpush" wrote:
> [snip]
|||Mrpush wrote:
> [snip]
Hi Mark
Doing good performance tests is a bit difficult, since it's hard to get
exactly the same load/working pattern that you have in real-life
production. Some of the test tools and ideas John has suggested will get
you started though.
We have just bought a new SAN (IBM DS4300 with 3 extra bays, which means
56 disks in total) and I've had the chance to test this in various
configurations. From previous performance monitoring on our existing
server/SAN, we know that we have a high number of reads. I think it's
something like 95% reads and 5% writes - or maybe an even bigger difference.
I've only focused on the data file, because the log file will be on a
RAID 1/10, but I've tested the performance with the data file on RAID 5
and RAID 1/10 arrays.
I'm not done with the testing and the conclusion yet, but my first
impression is that RAID 5 still gives the best read performance (that was
also what I expected). I've set a maximum of 28 disks that I can use for
my data array, and I've tested with 28 disks in RAID 5, 28 disks in
RAID 1/10, and with 14 disks in RAID 5. Even with only 14 disks in RAID 5, I
got slightly better read performance than with RAID 1/10.
The write performance is of course better on RAID 1/10, but actually the
difference seems smaller than I'd have expected.
It's not a very "scientific" test I've done, so there can be a lot of
factors that I haven't accounted for. Also, the purpose of the test
wasn't to see a general difference between RAID 5 and RAID 1/10, but only
to see the difference on this specific SAN with its specific
configuration. This means that there can very well be controller
settings, cache settings etc. that "mask" the result and maybe level
out the difference between the various RAID configurations. The IBM
storage manager also treats RAID 1 and RAID 10 as the same, so I'm
not quite sure that it creates a true RAID 10 array. I've talked to IBM
about this, but I'm still waiting on a reply (the guy I should
talk to is on holiday). This little "twist" can of course also make a
difference in the performance of the different RAIDs.
Regards
Steen Schlüter Persson
Databaseadministrator / Systemadministrator

Friday, March 23, 2012

Help with security setup on SQL

Hi - I have a simple database (SQL 2000) on a dedicated server - at the
moment I only use SPs and tables. All of these currently have the
owner set to DBO. Is this OK?
When using .NET to allow people to access the database via the web,
should I first set up a user within SQL Server, and then amend my
connection string in the .config file to use that user only (and
reserve the SA login for myself - as I need to administer the database
via the web too)?
What do I need to be careful of when setting permissions for users? E.g.
the users will need to be able to add/amend many tables, and to be
able to run the SPs. In some tables, they will also have to be able to
run delete queries against the DB.
Are there any 'idiots' guides to this to help me get started?
Thanks for any help,
Mark
*** Sent via Developersdex http://www.codecomments.com ***
Don't just participate in USENET...get rewarded for it!|||I would start with Books Online. There is a lot of good information on
Security that should get you going in the right direction.
security-SQL Server, overview
Andrew J. Kelly SQL MVP
"Mark" <anonymous@.devdex.com> wrote in message
news:Oiht6wAuEHA.2624@.TK2MSFTNGP11.phx.gbl...
> [snip]
|||Mark
In addition to Andrew's advice, I'd also recommend you read some material
about SQL injection, which can hurt your SQL Server database.
http://www.dbazine.com/cook8.shtml
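The core defence against injection is parameterized queries rather than string concatenation. A hedged sketch using Python's built-in sqlite3 module purely for illustration (the same `?` placeholder pattern applies to SQL Server through ADO.NET or ODBC drivers; the table and column names are made up):

```python
# Demonstrates why parameters beat string concatenation: the classic
# "' OR '1'='1" payload matches nothing when bound as a parameter.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('mark', 's3cret')")

payload = "' OR '1'='1"

# UNSAFE: attacker-controlled text becomes part of the SQL statement.
unsafe_sql = "SELECT secret FROM users WHERE name = '" + payload + "'"
leaked = conn.execute(unsafe_sql).fetchall()   # returns every row

# SAFE: the driver binds the value; it is only ever treated as data.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)
).fetchall()                                   # returns no rows

print("concatenated:", leaked)
print("parameterized:", safe)
```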
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:O$FaeoDuEHA.1400@.TK2MSFTNGP11.phx.gbl...
> [snip]

Help with Security Model

Hi all,

I need some sugestions from all of you about setting up security model in
our SQL2000 box.

The server was set up using Mixed mode. However, all the applications
(web and MS Access) access the server using the "sa" userid.

There are several databases in our server. Ex: (DB1,DB2,DB3,DB4 and DB5)

Application 1: need read/write access to DB1,DB2 and DB3
Application 2: need read/write access to DB5
Application 3: need read/write access to DB4 and DB3

Should I set up three userids and give them dbo access to those
databases that they need to use?

Does that make any sense to you?

Thank you for all your suggestions.|||The 'sa' account should NEVER be used by ANY application or user (other than the DBA, and then carefully).

Use three different UID/Pwd for the applications and GRANT permissions that way.|||In addition, unless they really NEED dbo, just GRANT the minimum they need.|||I'm a firm believer in the "principle of least privilege", meaning you create as many accounts as you need and only give each one the privileges that it needs to do its job. Check out the predefined database roles (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_sp_help_5omd.asp) to see if your accounts even need dbo, or if a combination of db_datareader and db_datawriter would do.

-PatP|||I take it a bit further than data_reader/writer and grant Select/Insert/Update/Delete permissions as required. For instance, a financial application allows Inserts but NO updates. Once a record has been inserted, that's the way it stays. Corrections are made by inserting another record to adjust the transaction (along with an explanation.) Auditors seem to prefer this for following the money trail. Good thing I'm sa... :)|||Thank you for all your suggestions.

Sunday, February 19, 2012

Help with installing/setup of SQL Server 2000 Trial

I've installed the SQL Server 2000 trial on a standalone XP Pro machine as a local server; however, the debugger does not work correctly. It will not allow me to step into the code of stored procedures. Can anyone provide me assistance in resolving this?

Before we go into the details, the looming question is:

"Why have you chosen SQL 2000 to eval, rather than SQL 2005? That seems odd."

|||Arnie, because I am a student at the University of Cincinnati and that is what my database programming class is based on. I wanted to use the same version, get ahead in class, and learn more than what the class will be covering. Also, evaluating SQL Server will hopefully help me decide if I want to pursue a career on the database side of things versus more of a Visual Studio or .NET concentration.|||

All the more reason to get a copy of SQL Server 2005. Most job opportunities (by the time you are looking) will be concerned with SQL 2005 skills. You will be able to accomplish just about everything using SQL 2005 Express that your class will be doing with SQL 2000. It's too bad that educational institutions are so moribund and change so slowly. When you finish, you will know a lot about a deprecated technology. I highly recommend learning SQL 2005.

Download SQL 2005 Express (free), or for about $50 you can buy the Developer's Edition of SQL Server (equivalent to the Enterprise Edition).

SQL Express is available here: http://msdn2.microsoft.com/en-us/sql/aa336346.aspx

SQL Server 2005 Express Books Online Express Edition
http://msdn2.microsoft.com/en-us/library/ms165706.aspx

SQL Server 2005 Express Edition (Advanced/SSMS/BI Toolkit)
http://tinyurl.com/yelwr9 (SSMSE)
http://tinyurl.com/ovcx3 (Advanced Services)
http://tinyurl.com/23hg7n (BI Toolkit)
http://msdn.microsoft.com/vstudio/express/sql/compare/default.aspx
http://msdn2.microsoft.com/en-us/library/ms365247.aspx

SQL Server 2005 Express – Installation Details
http://msdn2.microsoft.com/en-us/library/ms143441.aspx

SQL Server 2005 Express Overview
http://msdn2.microsoft.com/en-us/library/ms345154.aspx
http://www.pcw.co.uk/personal-computer-world/software/2155087/review-microsoft-sql-server

SQL Server 2005 Express System Requirements
http://www.microsoft.com/sql/editions/express/sysreqs.mspx

|||I certainly appreciate your advice; however, if you could see fit to help me with the SQL 2000 trial problem, I'd appreciate that. And if you choose not to, I understand and am thankful for your opinion.|||

One of the reasons I was trying to direct you away from SQL 2000 is that the 'Debugger' is somewhat temperamental and problematic. To paraphrase a recent series of drug and alcohol ads, most of us 'Just say No!'

Are you attempting to access the Debugger by right clicking on the stored procedure name in the Object Explorer?

And if so, what errors or other indicators are you getting that it is not working?

|||

Arnie, I wish I could just say no, however I don't have any spare cycles to deal with taking something I've done in 2005 to UC's 2000. I've got a rather heavy class load this quarter. I will take your advice and learn 2005 after this quarter's over.

Yes, I'm accessing the debugger by right-clicking on the name of the stored procedure in the object explorer. What it does is open the debugger, then the code executes immediately upon the debugger window displaying, without allowing me to start or step into the code.

Is there another way to start the debugger for a stored procedure?

|||Could you post the procedure code that you are attempting to debug?|||Yes I can; do I just copy and paste it into a post message?|||

Here is the code

SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS OFF
GO


CREATE PROCEDURE spPFW_Load_Financial_Reporting_Data
AS
--
-- spPFW_Load_Financial_Reporting_Data
--
-- Is the main procedure that executes and controls the loading of Monthly PFW data into the tblPFW_Account_Header and
-- tblPFW_Financial_Reporting_Data tables.
--
-- It retrieves values from the tblPFW_Report_Line_Master, tblPFW_Account_Header & tblPFW_Monthly_Load_Data tables in
-- order to create new rows in both the tblPFW_Account_Header and tblPFW_Financial_Reporting_Data
--
-- It may execute the following store procedures if necessary:
-- master.dbo.xp_cmdshell
-- 1. to determine if there is a PFWDataLoad.txt file in the Load folder to process
-- 2. to execute the DTS PFW Data Load package that populates the tblPFW_Monthly_Load_Data from the PFWDataLoad.txt file
-- 3. to copy the PFWDataLoad.txt file to the Load History folder at end successful processing
-- 4. to delete the PFWDataLoad.txt file from Load folder at end successful processing
-- spPFW_Create_tblPFW_Data_Load_Message_Log to create the message log if it does not exist
-- spPFW_Write_Data_Load_Message to log load processing and/or error messages to tblPFW_Load_Data_Message_Log
-- spPFW_Create_tblPFW_Account_Header to create the Account Header table if it does not exist
-- spPFW_Create_tblPFW_Financial_Reporting_Data to create the Financial Reporting Data table if it does not exist
-- spPFW_Reset_tblPFW_Monthly_Load_Data to create an empty table for the DTS xxxxxx package to populate
-- spPFW_Update_Account_Header to determine if any new accounts were added to PFW & creates and inserts new Account
-- Header rows if necessary.
-- spPFW_Insert_Financial_Reporting_Data to insert all new Financial Reporting Data rows
--
--
--
-- The procedure first checks if a PFWDataLoad.txt file exists in the L:\Home\PFWtoNoetix\Load folder. If it does not,
-- there is nothing to process & it returns 0.
--
-- If a PFWDataLoad.txt file exists, the procedure either executes or does not execute the following 17 processing
-- steps based on @.Load_Status set in a previous step. It logs processing and/or error messages as it proceeds.
-- The first 7 steps check if the required tables and stored procedures exist. In some cases, it will create the
-- non-existent tables. Step 8 drops & re-creates a new tblPFW_Monthly_Load_Data table. Step 9 runs the DTS Package
-- xxxxx to load it from the PFWDataLoad.txt file. Step 10 checks if the data being loaded is already present in the
-- tblPFW_Financial_Report_Data table. Steps 11 thru 15 are executed if the data has not been previously loaded.
-- Step 11 updates the tblPFW_Account_Header table with any new accounts found in the tblPFW_Monthly_Load_Data table.
-- Steps 12 thru 15 insert rows into the tblPFW_Financial_Report_Data for the new months data. Step 16 moves and renames
-- PFWDataLoad.txt to L:\Home\PFWtoNoetix\Load_History folder named as PFWDataLoad.YYYY.MM.txt with YYYY & MM equaling
-- the PFW_Year & PFW_Month just processed.
--
-- Step 17 finishes up the load process.
-- It only commits log messages to the tblPFW_Load_Data_Message_Log if a processing error occurs. All inserts
-- into the tblPFW_Account_Header & tblPFW_Financial_Report_Data tables were previously rolled back.
--
-- If no errors occur, then all inserts into the tblPFW_Load_Data_Message_Log, tblPFW_Account_Header &
-- tblPFW_Financial_Report_Data tables are committed to end the process.
--
--
--
-- The procedure does not update values in either the tblPFW_Account_Header or tblPFW_Financial_Report_Data tables, it
-- only inserts a new month of PFW financial data.
--
--
--
-- Load Processing Steps executed if a PFWDataLoad.txt file exists
-- 1. Check if tblPFW_Load_Data_Message_Log exists, if it does continue to step 2
-- If it does not exist, check if spPFW_Create_tblPFW_Data_Load_Message_Log exists, if that does not, abort load process
-- If spPFW_Create_tblPFW_Data_Load_Message_Log exists, create tblPFW_Data_Load_Message_Log
-- If tblPFW_Data_Load_Message_Log did not get created, abort the load process
--
-- 2. Check if spPFW_Reset_tblPFW_Monthly_Load_Data exists, if it does continue to step 3
-- If it does not exist, it's a hard stop, abort load process
--
-- 3. Check if spPFW_Update_Account_Header exists, if it does continue to step 4
-- If it does not exist, it's a hard stop, abort load process
--
-- 4. Check if spPFW_Load_Financial_Reporting_Data exists, if it does continue to step 5
-- If it does not exist, it's a hard stop, abort load process
--
-- 5. Check if tblPFW_Report_Line_Master exists
-- If it does exist, check it to make sure it's not empty, if it's empty abort the load process, else continue to step 6
-- If it does not exist, it's a hard stop, abort load process
--
-- 6. Check if tblPFW_Account_Header exists, if it does continue to step 7
-- If it does not exist, check if spPFW_Create_tblPFW_Account_Header exists, if it does not, abort load process
-- If spPFW_Create_tblPFW_Account_Header exists, create tblPFW_Account_Header
-- If tblPFW_Account_Header did not get created, abort the load process
--
-- 7. Check if tblPFW_Financial_Reporting_Data exists, if it does continue to step 8
-- If it does not exist, check if spPFW_Create_tblPFW_Financial_Reporting_Data exists, if it does not, abort load process
-- If spPFW_Create_tblPFW_Financial_Reporting_Data exists, create tblPFW_Financial_Reporting_Data
-- If tblPFW_Financial_Reporting_Data did not get created, abort the load process
--
-- 8. Drop existing tblPFW_Monthly_Load_Data, then recreate it
-- If it gets created, continue to step 9
-- If it does not get created, abort the load process
--
-- 9. Run DTS Package PFW Data Load to Load PFWDataLoad.txt file into tblPFW_Monthly_Load_Data
-- If it gets loaded successfully, continue to step 10
-- If it does not get loaded successfully, abort the load process
--
-- 10. Check if this month has already been added to tblPFW_Financial_Reporting_Data
-- If its already in tblPFW_Financial_Reporting_Data, abort load process, else continue to step 11
--
-- 11. Update Account Header table
-- If an error occurred inserting Account Header rows, execute rollback, & abort load process
--
-- 12. Insert Balance Sheet - Account & Company Level rows into tblPFW_Financial_Reporting_Data
-- If an error occurred while inserting Balance Sheet - Account or Company Level rows, execute rollback, & abort load process
--
-- 13. Insert Income Statement - Account Level rows into tblPFW_Financial_Reporting_Data
-- If an error occurred while inserting Income Statement - Account Level rows, execute rollback, & abort load process
--
-- 14. Insert Income Statement - Company Level rows into tblPFW_Financial_Reporting_Data
-- If an error occurred while inserting Income Statement - Company Level rows, execute rollback, & abort load process
--
-- 15. Insert Income Statement - BGI Consolidated Level rows into tblPFW_Financial_Reporting_Data
-- If an error occurred while inserting Income Statement - BGI Consolidated Level rows, execute rollback, & abort load process
--
-- 16. Moves & renames PFWDataLoad.txt file to Load History folder as PFWDataLoad.YYYY.MM.txt
--
-- 17. Complete the Load Process
-- Log either a Successful Load Process or an Error Occurred message
-- Commit any message rows and/or Financial Reporting Data Rows
--
--
-- Revisions
--
-- 1. Revision: Initial creation
-- Date: 3/20/2007
-- By: Greg Ouellette
--

-- Declare general use variables
DECLARE @.test int
DECLARE @.syserror int
DECLARE @.id int
DECLARE @.SP_Return int
DECLARE @.Report_Line_Master_Count int

DECLARE @.Load_Status int
DECLARE @.Load_PFW_Year int
DECLARE @.Load_PFW_Month int
DECLARE @.Load_YYYYMM int
DECLARE @.MoveCmd varchar (250)


SET @.Load_Status = 0
SET @.Load_PFW_Year = 9999
SET @.Load_PFW_Month = 99

EXEC @.Load_Status = master.dbo.xp_cmdshell 'dir L:\Home\PFW\Load\PFWDataLoad.txt', NO_OUTPUT
IF (@.Load_Status <> 0) -- There is no new PFWDataLoad.txt file to load
Return 0

--
-- Check if all the tables and stored procedures exist before loading the new PFWDataLoad.txt file
--

IF @.Load_Status = 0
-- 1. Check if tblPFW_Load_Data_Message_Log exists, if it does continue to step 2
-- If it does not exist, check if spPFW_Create_tblPFW_Data_Load_Message_Log exists, if it does not, abort load process
-- If spPFW_Create_tblPFW_Data_Load_Message_Log exists, create tblPFW_Data_Load_Message_Log
-- If tblPFW_Data_Load_Message_Log did not get created, abort the load process
BEGIN
SET @.id = null
SELECT @.id = id FROM dbo.sysobjects WHERE id = (object_id(N'[dbo].[tblPFW_Data_Load_Message_Log]'))
IF @.id = null
BEGIN
SELECT @.id = id FROM dbo.sysobjects WHERE id = (object_id(N'[dbo].[spPFW_Create_tblPFW_Data_Load_Message_Log]'))
IF @.id = null
SET @.Load_Status = -1
ELSE
BEGIN
EXECUTE @.sp_Return = spPFW_Create_tblPFW_Data_Load_Message_Log @.syserror
IF @.sp_Return = 0 -- tblPFW_Load_Log created successfully, log msg
BEGIN
EXECUTE @.sp_Return = spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG001A'
IF @.sp_Return <> 0
SET @.Load_Status = -1 -- Write message failed
END
ELSE -- tblPFW_Load_Log was not created, can not insert message, it's a hard stop, abort load process
SET @.Load_Status = -1
END
END
END

IF @.Load_Status = 0 -- Log Load Process Started msg
BEGIN
EXECUTE @.sp_Return = spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG0000'
IF @.sp_Return <> 0
SET @.Load_Status = -1 -- Write message failed
END


IF @.Load_Status = 0
-- 2. Check if spPFW_Reset_tblPFW_Monthly_Load_Data exists
-- If it does not exist, it's a hard stop, abort load process
BEGIN
SET @.id = null
SELECT @.id = id FROM dbo.sysobjects WHERE id = (object_id(N'[dbo].[spPFW_Reset_tblPFW_Monthly_Load_Data]'))
IF @.id = null /* spPFW_Reset_Monthly_Load_Data does not exist, log msg, set @.Load_Status to abort load process */
BEGIN
SET @.Load_Status = -2
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG002A'
END
END

IF @.Load_Status = 0
-- 3. Check if spPFW_Update_Account_Header exists
-- If it does not exist, it's a hard stop, abort load process
BEGIN
SET @.id = null
SELECT @.id = id FROM dbo.sysobjects WHERE id = (object_id(N'[dbo].[spPFW_Update_Account_Header]'))
IF @.id = null /* spPFW_Update_Account_Header does not exist, log msg, set @.Load_Status to abort load process */
BEGIN
SET @.Load_Status = -3

EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG003A'
END
END

IF @.Load_Status = 0
-- 4. Check if spPFW_Insert_Financial_Reporting_Data exists
-- If it does not exist, it's a hard stop, abort load process
BEGIN
SET @.id = null
SELECT @.id = id FROM dbo.sysobjects WHERE id = (object_id(N'[dbo].[spPFW_Insert_Financial_Reporting_Data]'))
IF @.id = null /* spPFW_Create_tblPFW_Monthly_Load_Data does not exist, log msg, set @.Load_Status to abort load process */
BEGIN
SET @.Load_Status = -4
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG004A'
END
END

IF @.Load_Status = 0
-- 5. Check if tblPFW_Report_Line_Master exists
-- If it does not exist, it's a hard stop, abort load process, else check it to make sure it's not empty
BEGIN
SET @.id = null
SELECT @.id = id FROM dbo.sysobjects WHERE id = (object_id(N'[dbo].[tblPFW_Report_Line_Master]'))
IF @.id = null /* tblPFW_Report_Line_Master does not exist, log message, set load status to abort */
BEGIN
SET @.Load_Status = -5
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG005A'
END
ELSE
BEGIN
SELECT @.Report_Line_Master_Count = COUNT(DISTINCT [Report Line]) FROM dbo.tblPFW_Report_Line_Master
IF @.Report_Line_Master_Count = 0 -- tblPFW_Report_Line_Master exists, however no rows in table, log message, set load status to abort
BEGIN
SET @.Load_Status = -5
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG005B'
END
END
END

IF @.Load_Status = 0
-- 6. Check if tblPFW_Account_Header exists, if it does continue next step
-- If it does not exist, check if spPFW_Create_tblPFW_Account_Header exists, if it does not, abort load process
-- If spPFW_Create_tblPFW_Account_Header exists, create tblPFW_Account_Header
-- If tblPFW_Account_Header did not get created, abort the load process
BEGIN
SET @.id = null
SELECT @.id = id FROM dbo.sysobjects WHERE id = (object_id(N'[dbo].[tblPFW_Account_Header]'))
IF @.id = null /* tblPFW_Account_Header does not exist, create it */
BEGIN
SELECT @.id = id FROM dbo.sysobjects WHERE (id = object_id(N'[dbo].[spPFW_Create_tblPFW_Account_Header]'))
IF @.id = null /* spPFW_Create_tblPFW_Account_Header does not exist, abort load process */
BEGIN
SET @.Load_Status = -6
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG006A'
END
ELSE -- spPFW_Create_tblPFW_Account_Header exists, create tblPFW_Account_Header
BEGIN
EXECUTE @.SP_Return = spPFW_Create_tblPFW_Account_Header @.syserror
IF @.SP_Return <> 0 -- Error creating tblPFW_Account_Header, log msg, set Load Status to abort Load Process
BEGIN
IF @.syserror <> 0 -- log the @.@.error and sysmessages description
EXECUTE spPFW_Write_Data_Load_Message @.syserror, @.Load_PFW_Year, @.Load_PFW_Month, 'sysmsg'
SET @.Load_Status = -6
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG006B'
END
ELSE -- tblPFW_Account_Header created, log msg,
BEGIN
EXECUTE @.SP_Return = spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG006C'
IF @.SP_Return <> 0 -- Write message failed
SET @.Load_Status = -6
END
END
END
END

IF @.Load_Status = 0
-- 7. Check if tblPFW_Financial_Reporting_Data exists, if it does continue next step
-- If it does not exist, check if spPFW_Create_tblPFW_Financial_Reporting_Data exists, if it does not, abort load process
-- If spPFW_Create_tblPFW_Financial_Reporting_Data exists, create tblPFW_Financial_Reporting_Data
-- If tblPFW_Financial_Reporting_Data did not get created, abort the load process
BEGIN
SET @.id = null
SELECT @.id = id FROM dbo.sysobjects WHERE id = (object_id(N'[dbo].[tblPFW_Financial_Reporting_Data]'))
IF @.id = null /* tblPFW_Financial_Reporting_Data does not exist */
BEGIN
SELECT @.id = id FROM dbo.sysobjects WHERE id = (object_id(N'[dbo].[spPFW_Create_tblPFW_Financial_Reporting_Data]'))
IF @.id = null /* spPFW_Create_tblPFW_Financial_Reporting_Data does not exist, log msg, set @.Load_Status to abort load process */
BEGIN
SET @.Load_Status = -7
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG007A'
END
ELSE -- spPFW_Create_tblPFW_Financial_Reporting_Data does exist, create tblPFW_Financial_Reporting_Data
BEGIN
EXECUTE @.SP_Return = spPFW_Create_tblPFW_Financial_Reporting_Data @.syserror
IF @.SP_Return <> 0 -- Error creating tblPFW_Financial_Reporting_Data, log msg, set Load Status to abort Load Process
BEGIN
IF @.syserror <> 0 -- log the @.@.error and sysmessages description
EXECUTE spPFW_Write_Data_Load_Message @.syserror, @.Load_PFW_Year, @.Load_PFW_Month, 'sysmsg'
SET @.Load_Status = -7
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG007B'
END
ELSE -- tblPFW_Financial_Reporting_Data created, log msg
BEGIN
EXECUTE @.SP_Return = spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG007C'
IF @.SP_Return <> 0 -- Write message failed
SET @.Load_Status = -7
END
END
END
END

--
-- If all the tables and stored procedures exist, then load tblPFW_Monthly_Load_Data from PFWDataLoad.txt file
--

IF @.Load_Status = 0
-- 8. Reset tblPFW_Monthly_Load_Data so new month's data can be loaded
BEGIN
EXECUTE @.SP_Return = spPFW_Reset_tblPFW_Monthly_Load_Data @.syserror
IF (@.SP_Return <> 0) -- Error resetting tblPFW_Monthly_Load_Data, log msg, set status to abort load process
BEGIN
IF @.syserror <> 0 -- log the @.@.error and sysmessages description
EXECUTE spPFW_Write_Data_Load_Message @.syserror, @.Load_PFW_Year, @.Load_PFW_Month, 'sysmsg'
SET @.Load_Status = -8
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG008A'
END
ELSE -- tblPFW_Monthly_Load_Data was reset successfully, log msg
BEGIN
EXECUTE @.SP_Return = spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG008B'
IF @.SP_Return <> 0 -- Write message failed
SET @.Load_Status = -8
END
END

IF @.Load_Status = 0
-- 9. Run DTS Package to Load PFWDataLoad.txt file into tblPFW_Monthly_Load_Data
--call DTSRun to execute it (via xp_cmdshell), etc.
--dtsrun /Sserver_name /Uuser_nName /Ppassword /Npackage_name /Mpackage_password

--dtsrun /Ffilename /Npackage_name /Mpackage_password
BEGIN
EXECUTE @.SP_Return = master.dbo.xp_cmdshell 'DTSRun /SBLUEMOON\BIGO /UBigO /P74654o /NPFW Data Load /M74654o'
--EXEC @.SP_Return = master.dbo.xp_cmdshell 'DTSRun /DTSRun /~Z0x2F3FF8E9164A7A7241D0849BD24CA7247E86CB4EECC129B5E26FD37D413B9B2ADBDE9C1D917AFF317DA388EF1B0A2FB233E9E765955B37B1BEE4CE9BA73460D04C312F8FFD2B8127516D0D5F546781F994DBE96BFA75E30E2D313C194341153CF1B3D539E1FEE5A4B28AD6A300303D06AFEBF82ABBA7AAB5B7F278F5984E3C44F9374B'
IF (@.SP_Return <> 0) -- Load of tblPFW_Monthly_Load_Data from PFWDataLoad.txt failed, log msg, set load status to abort process
BEGIN
SET @.Load_Status = -9
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG009A'
END
ELSE -- tblPFW_Monthly_Load_Data was created successfully, log msg
BEGIN
EXECUTE @.SP_Return = spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG009B'
IF @.SP_Return <> 0 -- Write message failed
SET @.Load_Status = -9
END
END

IF @.Load_Status = 0
-- 10. Load of tblPFW_Monthly_Load_Data successful, get PFW Year & PFW Month, check if this month has already been added to
-- tblPFW_Financial_Reporting_Data, if so, log msg & set load status to abort load process
BEGIN
SELECT @.Load_PFW_Year = MAX(DISTINCT [PFW Year]) FROM dbo.tblPFW_Monthly_Load_Data
SELECT @.Load_PFW_Month = MAX(DISTINCT [PFW Month]) FROM dbo.tblPFW_Monthly_Load_Data
SELECT @.Load_YYYYMM = MAX(DISTINCT YYYYMM) FROM dbo.tblPFW_Financial_Reporting_Data WHERE ([PFW Year] = @.Load_PFW_Year) AND ([PFW Month] = @.Load_PFW_Month)
IF @.Load_YYYYMM <> null /* Monthly data has already been loaded, log MSG & set load status */
BEGIN
SET @.Load_Status = -10
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG010A'
END
END
COMMIT -- All log messages to this point


--
-- If all the tables and stored procedures exist, and tblPFW_Monthly_Load_Data was loaded successfully, then determine if there have been
-- any accounts added to PFW during the month and update tblPFW_Account_Header if so, then create & insert the new PFW Year & PFW Month rows
-- into tblPFW_Financial_Reporting_Data
--

IF @.Load_Status = 0
-- 11. Update Account Header table
BEGIN
EXECUTE @.SP_Return = spPFW_Update_Account_Header @.syserror
IF @.SP_Return <> 0 -- Error occurred inserting Account Header rows, execute rollback, log msg & set load status
BEGIN
ROLLBACK TRANSACTION
IF @.syserror <> 0 -- log the @.@.error and sysmessages description
EXECUTE spPFW_Write_Data_Load_Message @.syserror, @.Load_PFW_Year, @.Load_PFW_Month, 'sysmsg'
SET @.Load_Status = -11
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG0011A'
END
ELSE -- Account Header table was updated successfully log msg
BEGIN
EXECUTE @.SP_Return = spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG0011B'
IF @.SP_Return <> 0 -- Write message failed
SET @.Load_Status = -11
END
END

IF @.Load_Status = 0
-- 12. Load Balance Sheet - Account & Company Level rows into tblPFW_Financial_Reporting_Data
BEGIN
EXECUTE @.SP_Return = spPFW_Insert_Financial_Reporting_Data 'Balance Sheet', 'All', @.syserror
IF @.SP_Return <> 0 -- Error occurred while inserting Balance Sheet - Account or Company Level rows, execute rollback, log msg & set load status
BEGIN
ROLLBACK TRANSACTION
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG011B' -- Rewrite previous msg dropped in Rollback
IF @.syserror <> 0 -- log the @.@.error and sysmessages description
EXECUTE spPFW_Write_Data_Load_Message @.syserror, @.Load_PFW_Year, @.Load_PFW_Month, 'sysmsg'
SET @.Load_Status = -12
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG012A'
END
ELSE -- Balance Sheet - Account & Company rows were inserted successfully, log msg
BEGIN
EXECUTE @.SP_Return = spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG0012B'
IF @.SP_Return <> 0 -- Write message failed
SET @.Load_Status = -12
END
END

IF @.Load_Status = 0
-- 13. Load Income Statement - Account Level rows into tblPFW_Financial_Reporting_Data
BEGIN
EXECUTE @.SP_Return = spPFW_Insert_Financial_Reporting_Data 'Income Statement', 'Account', @.syserror
IF @.SP_Return <> 0 -- Error occurred inserting Income Statement - Account Level rows, execute rollback, log msg & set load status
BEGIN
ROLLBACK TRANSACTION
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG012B' -- Rewrite previous msg dropped in Rollback
IF @.syserror <> 0 -- log the @.@.error and sysmessages description
EXECUTE spPFW_Write_Data_Load_Message @.syserror, @.Load_PFW_Year, @.Load_PFW_Month, 'sysmsg'
SET @.Load_Status = -13
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG013A'
END
ELSE -- Income Statement - Account Level rows were inserted successfully, log msg
BEGIN
EXECUTE @.SP_Return = spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG0013B'
IF @.SP_Return <> 0 -- Write message failed
SET @.Load_Status = -13
END
END


IF @.Load_Status = 0
-- 14. Create & insert Income Statement - Company Level rows into tblPFW_Financial_Reporting_Data
BEGIN
EXECUTE @.SP_Return = spPFW_Insert_Financial_Reporting_Data 'Income Statement', 'Company', @.syserror
IF @.SP_Return <> 0 -- Error occurred inserting Income Statement - Company Level rows, execute rollback, log msg & set load status
BEGIN
ROLLBACK TRANSACTION
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG013B' -- Rewrite previous msg dropped in Rollback
IF @.syserror <> 0 -- log the @.@.error and sysmessages description
EXECUTE spPFW_Write_Data_Load_Message @.syserror, @.Load_PFW_Year, @.Load_PFW_Month, 'sysmsg'
SET @.Load_Status = -14
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG014A'
END
ELSE -- Income Statement - Company Level rows were inserted successfully, log msg
BEGIN
EXECUTE @.SP_Return = spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG0014B'
IF @.SP_Return <> 0 -- Write message failed
SET @.Load_Status = -14
END
END

IF @.Load_Status = 0
-- 15. Create & insert Income Statement - Consolidated Level rows into tblPFW_Financial_Reporting_Data
BEGIN
EXECUTE @.SP_Return = spPFW_Insert_Financial_Reporting_Data 'Income Statement', 'Consolidated', @.syserror
IF @.SP_Return <> 0 -- Error occurred inserting Income Statement Consolidated Level Data, execute rollback, log msg & set load status
BEGIN
ROLLBACK TRANSACTION
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG014B' -- Rewrite Msg, dropped in Rollback
IF @.syserror <> 0 -- log the @.@.error and sysmessages description
EXECUTE spPFW_Write_Data_Load_Message @.syserror, @.Load_PFW_Year, @.Load_PFW_Month, 'sysmsg'
SET @.Load_Status = -15
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG015A'
END
ELSE -- Income Statement - Consolidated Level rows were inserted successfully, log msg
BEGIN
EXECUTE @.SP_Return = spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG015B'
IF @.SP_Return <> 0 -- Write message failed
SET @.Load_Status = -15
END
END

IF @.Load_Status = 0
-- 16. Copy PFWDataLoad.txt file to Load History folder
BEGIN
SET @.MoveCmd = 'move L:\Home\PFW\Load\PFWDataLoad.txt L:\Home\PFW\Load_History\PFWDataLoad.' + cast(@.Load_PFW_Year as char(4)) + '.' + cast(@.Load_PFW_Month as varchar(2)) + '.txt'
EXEC @.SP_Return = master.dbo.xp_cmdshell @.MoveCmd
IF (@.SP_Return <> 0) -- Copy of PFWDataLoad.txt to Load History folder failed
BEGIN
SET @.Load_Status = -16
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG016A'
END
ELSE -- Copy of PFWDataLoad.txt to Load History folder successful
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG016B'
END

-- 17. Complete the Load Process

IF @.Load_Status = 0 -- Load Process completed successfully, log msg
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG017B'
ELSE -- Error occurred in Load Process, log msg
EXECUTE spPFW_Write_Data_Load_Message @.Load_Status, @.Load_PFW_Year, @.Load_PFW_Month, 'Load MSG017A'

COMMIT -- Commit all inserts into tblPFW_Financial_Reporting_Data and any remaining inserts into tblPFW_Data_Load_Message_Log

GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
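An aside on the existence checks used throughout the procedure above: `IF @.id = null` only behaves as intended because the script is created under SET ANSI_NULLS OFF; with ANSI_NULLS ON that comparison is never true. A sketch of an equivalent check that does not depend on that setting (table name taken from the procedure, messages illustrative):

```sql
-- Portable existence check: test OBJECT_ID with IS NULL instead of = null
IF OBJECT_ID(N'[dbo].[tblPFW_Data_Load_Message_Log]') IS NULL
    PRINT 'Table is missing - the load would be aborted here'

-- The same idea using the intermediate variable the procedure uses
DECLARE @id int
SELECT @id = id FROM dbo.sysobjects
 WHERE id = OBJECT_ID(N'[dbo].[tblPFW_Data_Load_Message_Log]')
IF @id IS NULL
    PRINT 'Table is missing'
```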

|||Is there anyone that can assist me?|||

Have you configured as in this article?

http://msdn2.microsoft.com/en-us/library/6ksc0a82(vs.80).aspx

|||Thanks for getting back to me. Yes, the config has been set to allow both the system account and the user Greg.|||

Arnie, this is the message that is entered into the event viewer after I run the debugger.

SQL Server when started as service must not log on as System Account. Reset to logon as user account using Control Panel.

I tried adding another account as an administrator, then changing my initial account (Greg) to limited, but that didn't work either

|||

It sounds like SQL Server is using the wrong account.

Perhaps these will help.


Configuration – Service Accounts, SQL Server 2005 – Setting Up Windows Service Accounts
http://support.microsoft.com/kb/283811/en-us
http://msdn2.microsoft.com/en-us/library/ms143691.aspx
http://msdn2.microsoft.com/en-us/library/ms143504.aspx
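To confirm which account the SQL Server service is actually logging on as (the root cause suggested by the event-log message above), the service configuration can be inspected from a command prompt. This is a generic sketch; the service name MSSQLSERVER assumes a default instance:

```sql
-- Run from a command prompt, or through xp_cmdshell as shown:
--   sc qc MSSQLSERVER
-- The SERVICE_START_NAME line in the output is the logon account.
EXEC master.dbo.xp_cmdshell 'sc qc MSSQLSERVER'
```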