
Thursday, March 8, 2012

Change a character in a string

I would like to know how to replace a certain character in a string, without using a cursor or a mix of SUBSTRING and CHARINDEX calls.
I would like to know if there is any single statement that does that.
Thanks

Try REPLACE

|||

I found it

It's REPLACE

Sorry

Bye bye
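For reference, a minimal sketch of REPLACE in T-SQL; the table and column in the UPDATE are hypothetical:

-- swap one character for another in a literal
SELECT REPLACE('2012-03-08', '-', '/')    -- returns '2012/03/08'

-- fix a character across a whole column
UPDATE Customers
SET Phone = REPLACE(Phone, '-', ' ')
WHERE Phone LIKE '%-%'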

Saturday, February 25, 2012

Certain rows to Excel files

Hello,

I have a table T1 with ID and Desc fields. I want to export this table to Excel files based on the ID field. So my stored procedure selects all the rows with the same T1.ID and exports them to ID.xls files. How can I do this?

You can easily achieve this with the DTS Import/Export Wizard. Supposing you're using SQL 2000: open Enterprise Manager -> go to the table from which you want to export data -> right-click and choose All Tasks -> Export Data -> in the Specify Table Copy or Query step, choose Use a Query... -> enter the SELECT command (e.g. SELECT * FROM Orders WHERE EmployeeID=6), or use the Query Builder -> complete the wizard.

|||

Thanks for the reply. How should I run the same query for each of the IDs separately in DTS and save the results to Excel files?
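Another route, which the threads below come back to, avoids DTS entirely: pre-create one workbook per ID from a template whose first row holds the column headers, then push rows into it with OPENROWSET. A minimal sketch, assuming C:\Export\1.xls already exists and its header row reads ID and Descr (Descr rather than Desc, to dodge the reserved word inside the pass-through query); ad hoc distributed queries must be allowed:

INSERT INTO OPENROWSET('Microsoft.Jet.OLEDB.4.0',
        'Excel 8.0;Database=C:\Export\1.xls;',
        'SELECT ID, Descr FROM [Sheet1$]')
SELECT ID, [Desc]
FROM T1
WHERE ID = 1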

Certain rows to Excel files

Hello,

I have a table T1 with ID and Desc fields. I want to export this table to Excel files based on the ID field. So my stored procedure selects all the rows with the same T1.ID and exports them to ID.xls files. How can I do this?

Hi,

While using the DTS Import/Export Wizard,
after choosing the data source and data destination,
the wizard asks you to Specify Table Copy or Query.
At that point, select the radio button [Use a Query to Specify the Data To Transfer].

It will take you to the Type SQL Statement step, where there is a query panel in which you can type a query with your condition.
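For example, a query with a condition for this table might look like the following (the ID value is illustrative):

SELECT ID, [Desc]
FROM T1
WHERE ID = 42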

|||

Thanks for the reply.

This should be done once every day, so I am trying to automate it. How should I develop my query and DTS package based on this criterion? Any example will be greatly appreciated.

|||

Select Jobs under SQL Server Agent, under Management of the selected server (in Enterprise Manager).

Select New Job and name it.
Select the Steps tab in the New Job Properties dialog.
Click New Step, name the step, and write the query in the command panel.
Click New Schedule, name the schedule, and select the schedule type.

That gives you the required daily automation.
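The same daily job can also be created in T-SQL rather than through the Enterprise Manager dialogs; a rough sketch, in which the job name and the command it runs are placeholders:

USE msdb

EXEC sp_add_job @job_name = 'ExportToExcel'

EXEC sp_add_jobstep
    @job_name = 'ExportToExcel',
    @step_name = 'Run export',
    @subsystem = 'TSQL',
    @command = 'EXEC MyDb.dbo.ExportProc'    -- placeholder command

EXEC sp_add_jobschedule
    @job_name = 'ExportToExcel',
    @name = 'Daily',
    @freq_type = 4,          -- daily
    @freq_interval = 1,
    @active_start_time = 10000    -- 01:00:00

EXEC sp_add_jobserver @job_name = 'ExportToExcel'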

|||

I still do not see how the query that I am writing will export data to a different excel file based on the ID. All the rows that have the same ID should go to a single excel file, so I am expecting more than one excel file based on the distinct values of ID to be created. How can I do this in DTS?

Certain rows to Excel files

Hello,
I have a table T1 with ID and Desc fields. I want to export this table to
Excel files based on the ID field. So my stored procedure selects all the rows with the same
T1.ID and exports them to ID.xls files. How can I do this?

Jim,
is this a one-off? If so I'd use the import/export wizard. If it is a
regular process, I'd look at using a scheduled DTS package (you can use the
import/export one as a template for this). There are other options namely
BCP and OSQL which run from the command-line which might also be relevant
for you.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com

|||

Hi Paul,
This should be done once every day, so I am trying to automate it. How
should I develop my query and dts based on this criteria? Any example will be
greatly appreciated.
"Paul Ibison" wrote:
> Jim,
> is this a one-off? If so I'd use the import/export wizard. If it is a
> regular process, I'd look at using a scheduled DTS package (you can use the
> import/export one as a template for this). There are other options namely
> BCP and OSQL which run from the command-line which might also be relevant
> for you.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
>
>

|||

Jim,
is the ID list hardcoded or do new IDs appear? If it is hardcoded, then
you'll have a single 'Transform data' task for each ID and the destination
will be also hardcoded in this case.
If this is not the case, then I'd have a lookup table which contains the
ID/path as a pair of columns. You'll need to then iterate through that
table. Each iteration could call the package, sending in the 2 details as
global variables. These global variables will be used to modify the package
to run with the correct ID values. This is done through the Dynamic
Properties task, which would be mapped to the excel file path and the
parameter used for the stored proc.
I realize this sounds complicated, but it's actually quite easy to set up.
The main issue is whether the ID's are known in advance or not. If they are,
then it is quite simple.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com

|||

This table has IDs and Desc already in it. IDs might be duplicated and I am
trying to export the rows with the same IDs to a single excel file, so
expecting many excel files based on the distinct values of IDs. DTS seems
quite complicated to me since I have not done anything with that yet.
"Paul Ibison" wrote:
> Jim,
> is the ID list hardcoded or do new IDs appear? If it is hardcoded, then
> you'll have a single 'Transform data' task for each ID and the destination
> will be also hardcoded in this case.
> If this is not the case, then I'd have a lookup table which contains the
> ID/path as a pair of columns. You'll need to then iterate through that
> table. Each iteration could call the package, sending in the 2 details as
> global variables. These global variables will be used to modify the package
> to run with the correct ID values. This is done through the Dynamic
> Properties task, which would be mapped to the excel file path and the
> parameter used for the stored proc.
> I realize this sounds complicated, but it's actually quite easy to set up.
> The main issue is whether the ID's are known in advance or not. If they are,
> then it is quite simple.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
>
>
>

|||

JIM.H. wrote:
> This table has IDs and Desc already in it. IDs might be duplicated and I am
> trying to export the rows with the same IDs to a single excel file, so
> expecting many excel files based on the distinct values of IDs. DTS seems
> quite complicated to me since I have not done anything with that yet.
>
Jim, here is a link to a message thread that describes how to use a
template XLS file, some VBScript, and OPENROWSET to create a new Excel
document and export data to it, without using DTS. Might prove useful
to you:
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=49926&whichpage=6
Tracy McKibben
MCDBA
http://www.realsqlguy.com

|||

Jim,
if you know all the IDs in advance, then you could hardcode these into the
DTS package as data flow tasks and this would be surprisingly easy (use the
export wizard to create the main template to see how it works), or if you
are more familiar with BCP, you could use a separate bcp commandline for
each excel file.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com

|||

Great. Thanks Tracy.
"Tracy McKibben" wrote:
> JIM.H. wrote:
> > This table has IDs and Desc already in it. IDs might be duplicated and I am
> > trying to export the rows with the same IDs to a single excel file, so
> > expecting many excel files based on the distinct values of IDs. DTS seems
> > quite complicated to me since I have not done anything with that yet.
> >
> Jim, here is a link to a message thread that describes how to use a
> template XLS file, some VBScript, and OPENROWSET to create a new Excel
> document and export data to it, without using DTS. Might prove useful
> to you:
> http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=49926&whichpage=6
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com
>
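Pulling the thread's suggestions together: the lookup/iteration idea Paul describes can also be sketched in plain T-SQL, looping over the distinct IDs and shelling out one bcp per file. This is only a sketch: the database, server, path and names are placeholders, bcp with -c writes tab-delimited text (Excel opens it, but it is not a true .xls workbook), and xp_cmdshell must be permitted.

DECLARE @id INT, @cmd VARCHAR(500)

DECLARE id_cursor CURSOR FOR
    SELECT DISTINCT ID FROM T1

OPEN id_cursor
FETCH NEXT FROM id_cursor INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    -- one file per distinct ID, named <ID>.xls
    SET @cmd = 'bcp "SELECT ID, [Desc] FROM MyDb.dbo.T1 WHERE ID = '
             + CAST(@id AS VARCHAR(10)) + '" queryout "C:\Export\'
             + CAST(@id AS VARCHAR(10)) + '.xls" -c -T -S MyServer'
    EXEC master..xp_cmdshell @cmd
    FETCH NEXT FROM id_cursor INTO @id
END

CLOSE id_cursor
DEALLOCATE id_cursor

For true workbooks rather than delimited text, the template-plus-OPENROWSET approach in Tracy's link is the closer fit.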

Certain records in a table cannot be edited, but others are OK

Hi,
I am using SQL Server 2000 as part of SBS2003.
I have a table containing approx. 25,000 records. It's a fairly simple
table with just 30 fields. There are no triggers on the table.
The table has existed in SQL for at least 12 months and has worked
fine. However, today I have a problem. When I try to edit certain
records in the table, the whole SQL system hangs until I eventually
get an ODBC time out error. Then the system works again, but I cannot
edit these records.
Strangely, the majority of the other records are fine and I can edit
them. This includes records which come before and after the "bad"
records. I have compared two records which fail, with two which are
ok, but I can't see any noticeable difference, at least nothing which
might cause this problem.
Anybody got any ideas what might be happening? Presumably some kind of
lock on the records. If so, how do I identify which records are
affected and how do I clear the locks?
Thanks for any help,
Colin

Bobby
Don't edit the data through EM (Enterprise Manager).
Do you succeed in running an UPDATE ... from QA (Query Analyzer)?
"Bobby" <bobby2@.blueyonder.co.uk> wrote in message
news:1191928927.027341.243860@.v3g2000hsg.googlegroups.com...
> Hi,
> I am using SQL Server 2000 as part of SBS2003.
> I have a table containing approx. 25,000 records. It's a fairly simple
> table with just 30 fields. There are no triggers on the table.
> The table has existed in SQL for at least 12 months and has worked
> fine. However, today I have a problem. When I try to edit certain
> records in the table, the whole SQL system hangs until I eventually
> get an ODBC time out error. Then the system works again, but I cannot
> edit these records.
> Strangely, the majority of the other records are fine and I can edit
> them. This includes records which come before and after the "bad"
> records. I have compared two records which fail, with two which are
> ok, but I can't see any noticable difference, at least nothing which
> might cause this problem.
> Anybody got any ideas what might be happening? Presumably some kind of
> lock on the records. If so, how do I identify which records are
> effected and how do I clear the locks?
> Thanks for any help,
> Colin
>

|||

On 9 Oct, 13:08, "Uri Dimant" <u...@.iscar.co.il> wrote:
> Bobby
> Don't edit the data throu EM .
> Do you succed to run UPDATE .... from QA?
>
No, it won't edit from anywhere, including my application.
A bit more information which I have just discovered. On Monday morning
I had to restore my database to dinner time Friday due to a problem we
had. It appears that all of the records which are "bad" are in the
range which were originally entered on Friday afternoon.
Is this possible? Does this indicate an indexing error? If so, how do
I refresh my indexes?
Thanks
Colin

|||

Bobby
How many rows are affected by your UPDATE? Run DBCC CHECKDB to see what
is going on.
"Bobby" <bobby2@.blueyonder.co.uk> wrote in message
news:1191933223.214949.93940@.g4g2000hsf.googlegroups.com...
> On 9 Oct, 13:08, "Uri Dimant" <u...@.iscar.co.il> wrote:
>> Bobby
>> Don't edit the data throu EM .
>> Do you succed to run UPDATE .... from QA?
> No, it won't edit from anywhere, including my application.
> A bit more information which I have just discovered. On Monday morning
> I had to restore my database to dinner time Friday due to a problem we
> had. It appears that all of the records which are "bad" are in the
> range which were originally entered on Friday afternoon.
> Is this possible. Does this indicate an indexing error. If so, how do
> I refresh my indexes?
> Thanks
> Colin
>

|||

On 9 Oct, 13:49, "Uri Dimant" <u...@.iscar.co.il> wrote:
> Bobby
> Howe many rows are affrectyed from your UPDATE? rUN DBCC CHECKDB to see what
> is going on?
> "Bobby" <bob...@.blueyonder.co.uk> wrote in message
> news:1191933223.214949.93940@.g4g2000hsf.googlegroups.com...
>
> > On 9 Oct, 13:08, "Uri Dimant" <u...@.iscar.co.il> wrote:
> >> Bobby
> >> Don't edit the data throu EM .
> >> Do you succed to run UPDATE .... from QA?
> > No, it won't edit from anywhere, including my application.
> > A bit more information which I have just discovered. On Monday morning
> > I had to restore my database to dinner time Friday due to a problem we
> > had. It appears that all of the records which are "bad" are in the
> > range which were originally entered on Friday afternoon.
> > Is this possible. Does this indicate an indexing error. If so, how do
> > I refresh my indexes?
> > Thanks
> > Colin

15 rows are affected. DBCC CHECKDB shows no errors.

|||

Bobby
OK, so when you issue an UPDATE against a "bad" row on the TEST server (you
said that you restored the db), what error do you get?
"Bobby" <bobby2@.blueyonder.co.uk> wrote in message
news:1191936289.762941.280800@.22g2000hsm.googlegroups.com...
> On 9 Oct, 13:49, "Uri Dimant" <u...@.iscar.co.il> wrote:
>> Bobby
>> Howe many rows are affrectyed from your UPDATE? rUN DBCC CHECKDB to see
>> what
>> is going on?
>> "Bobby" <bob...@.blueyonder.co.uk> wrote in message
>> news:1191933223.214949.93940@.g4g2000hsf.googlegroups.com...
>>
>> > On 9 Oct, 13:08, "Uri Dimant" <u...@.iscar.co.il> wrote:
>> >> Bobby
>> >> Don't edit the data throu EM .
>> >> Do you succed to run UPDATE .... from QA?
>> > No, it won't edit from anywhere, including my application.
>> > A bit more information which I have just discovered. On Monday morning
>> > I had to restore my database to dinner time Friday due to a problem we
>> > had. It appears that all of the records which are "bad" are in the
>> > range which were originally entered on Friday afternoon.
>> > Is this possible. Does this indicate an indexing error. If so, how do
>> > I refresh my indexes?
>> > Thanks
>> > Colin
> 15 rows are affected. DBCC CHECKDB shows no errors.
>

|||

On 9 Oct, 14:27, "Uri Dimant" <u...@.iscar.co.il> wrote:
> Bobby
> Ok , so when you issue an UDDATE against a "bad" row on TEST server (you
> said that you restotred the db) , what error do you get?
>
The error doesn't occur on the test database, only on "live". If I run
the update on live I get,
[Microsoft][ODBC SQL Server Driver]Timeout expired
If I turn off the timeout it hangs indefinitely. While it is hung,
nobody can do anything with the table. This only happens for the 15
"bad" records. All of the rest are fine.
Thanks for your help

|||

What happens if you create a duplicate table (different name) and insert all
rows into that table?
"Bobby" <bobby2@.blueyonder.co.uk> wrote in message
news:1191938900.611210.83780@.r29g2000hsg.googlegroups.com...
> On 9 Oct, 14:27, "Uri Dimant" <u...@.iscar.co.il> wrote:
>> Bobby
>> Ok , so when you issue an UDDATE against a "bad" row on TEST server (you
>> said that you restotred the db) , what error do you get?
> The error doesn't occur on the test database, only on "live". If I run
> the update on live I get,
> [Microsoft][ODBC SQL Server Driver]Timeout expired
> If I turn off the timeout it hangs indefinately. While it is hung,
> nobody can do anything with the table. This only happens for the 15
> "bad" records. All of the rest are fine.
> Thanks for your help
>
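A minimal sketch of that duplicate-table test; the table and column names are placeholders, since the thread never names them:

-- copy the schema and data into a fresh table
SELECT *
INTO MyTable_copy
FROM MyTable

-- then retry the failing statement against the copy
UPDATE MyTable_copy
SET SomeCol = SomeCol
WHERE ID = 42    -- one of the "bad" row IDs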
|||

On 9 Oct, 15:25, "Jay" <s...@.nospam.org> wrote:
> What happens if you create a duplicate table (different name) and insert all
> rows into that table?
>
That seems to have fixed the problem. Only trouble is I can't do it
for real right now due to users on the system. I'll have to wait until
tonight, and login remotely.
Thanks

|||

Glad it helped.
This goes back to a very old DBA joke:
What part of "dump and reload" don't you understand?
"Bobby" <bobby2@.blueyonder.co.uk> wrote in message
news:1191941917.222229.228010@.d55g2000hsg.googlegroups.com...
> On 9 Oct, 15:25, "Jay" <s...@.nospam.org> wrote:
>> What happens if you create a duplicate table (different name) and insert
>> all
>> rows into that table?
> That seems to have fixed the problem. Only trouble is I can't do it
> for real right now due to users on the system. I'll have to wait until
> tonight, and login remotely.
> Thanks
>

|||

On 9 Oct, 15:58, Bobby <bob...@.blueyonder.co.uk> wrote:
> On 9 Oct, 15:25, "Jay" <s...@.nospam.org> wrote:
> > What happens if you create a duplicate table (different name) and insert all
> > rows into that table?
> That seems to have fixed the problem. Only trouble is I can't do it
> for real right now due to users on the system. I'll have to wait until
> tonight, and login remotely.
> Thanks
Unfortunately the problem is not resolved. Last night (with nobody
else on the system) I exported my table, deleted the original, created
a new table with the same name and fields as the original and used SQL
Analyser to copy the data from the export to the new table.
Everything was fine, I could modify the "bad" records no problem. This
morning I have just arrived in. Same as yesterday, there are 15
records which I cannot modify. Every other record in the table is ok.
If I look at the data, I can see the changes that I made last night,
but I can no longer modify them.
I'm not sure what to do now, apart from trying to forget that the bad
records exist. Anybody got any better ideas?
Thanks for your help,
Confused
Colin
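For completeness, the lock question from the start of the thread can be checked quickly on SQL Server 2000; a minimal sketch, run from Query Analyzer while the update is hanging:

EXEC sp_who2    -- the BlkBy column shows which spid is blocking the hung one
EXEC sp_lock    -- lists the locks each spid holds, by object and resource

-- if a stale connection turns out to hold the lock, it can be ended with
-- KILL <spid>  (only after confirming what that connection is doing)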

Certain numeric fields not read from the Excel file when using an Excel file source.

I have the Excel Connection Manager and Source set up to read the contents of an Excel file. For some reason a couple of numeric fields from the Excel worksheet are brought over as nulls, even though they have values of 300 and 150. I am not sure why this is happening. I looked into the format of the fields and they are set to General in Excel; I tried setting them to numeric and that did not help.

All the other content from the Excel file is coming through except for the 2 numeric fields.

I tried to bring the contents from the Excel source to a text file in CSV format, and for some reason the 2 numeric fields came out as blank.

Any inputs on getting this addressed will be much appreciated.

Thanks,

Manisha

Try doing a data conversion of this column in the transformation phase, using the Derived Column Transformation or the Data Conversion Transformation. Hope that works for you.
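In a Derived Column or Data Conversion transformation, the cast is written with SSIS cast operators; for instance (the column name [Amount] is a placeholder):

(DT_I4)[Amount]
(DT_NUMERIC,10,2)[Amount]

The first casts to a four-byte signed integer, the second to a numeric with precision 10 and scale 2.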

|||

It has to do with a registry setting. I got a response from some other forum and I tried it and it worked.

|||

I am having a similar issue. I am importing an Excel sheet from a customer of ours into a SQL Server 2005 table. The first column is text and imports OK. The 2nd column has numbers, but the first two rows import as nulls. If I open the sheet and retype the numbers exactly as they appear, then the sheet imports OK. Each time they send us this sheet I have to retype the numbers in the first two rows. If I import the sheet into Access (using the Access Import wizard), Access does not have a problem and the first two numbers show up OK in Access (without my having to retype them first). How were you able to resolve this issue? Thanks.

|||

Al C. wrote:

I am having a similar issue. I am importing an Excel sheet from a customer of ours into a SQL Server 2005 table. The first column is text and imports OK. The 2nd column has numbers, but the first two rows import as nulls. If I open the sheet and retype the numbers exactly as they appear, then the sheet imports OK. Each time they send us this sheet I have to retype the numbers in the first two rows. If I import the sheet into Access (using the Access Import wizard), Access does not have a problem and the first two numbers show up OK in Access (without my having to retype them first). How were you able to resolve this issue? Thanks.

Can you set up the Excel connector to import all fields as text fields, and then later convert them to numerics using a derived column transformation?

|||

Phil,

Thanks for your reply. I verified in the Excel Source (Advanced Editor) that it is using DT_WSTR in both the External Columns and Output Columns. When I click the Preview button it shows NULL in the first two rows. If I add a Data Viewer it also shows nulls. Evidently the first two rows are considered text even though the values are numeric. I can see the difference when I use the formula auditing tool: the first two rows align to the left (indicating text) and the others align to the right (numeric). So what I have gathered is that Excel samples the first nn rows to determine the datatype, and since most values are numeric it deems the datatype of the column to be numeric and replaces the first two values with nulls. If I modify the Excel connection string and add 'IMEX=1' (import mode 1) to the extended properties, then it works OK.
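For reference, a Jet connection string along those lines looks something like this (the path is a placeholder, and the registry setting mentioned earlier is presumably TypeGuessRows / ImportMixedTypes under HKLM\Software\Microsoft\Jet\4.0\Engines\Excel):

Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Import\Customer.xls;
Extended Properties="Excel 8.0;HDR=YES;IMEX=1"

With IMEX=1, columns containing mixed types are read as text, instead of the minority type being replaced with NULLs.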
