Saturday, July 12, 2014

When should I use a function?

I was asked recently by a colleague: When should I use a function? I wanted to refresh my memory on the subject and give a high level, but informative, response. Here goes...

Functions let you encapsulate code and reuse complex statements. UDFs (user-defined functions) in SQL Server come in two flavors: scalar (returning a single value) and table-valued. You can think of user-defined scalar functions just like the built-in system functions (SUBSTRING, GETDATE, CHARINDEX): they return one value, and when applied to multiple rows, SQL Server executes the function once for every row in the result set.

Table valued functions can be similar, in functionality (see what I did there?), to a stored procedure. Rather than executing the stored procedure to get a result, you can select from them, join them to other tables, and generally use them anywhere you would use a table. Awesome, right?

Don’t go changing all your Stored Procs to TVFs just yet...

If the TVF contains a single statement (called an inline TVF), the optimizer treats it much like a view: it references the underlying objects in the execution plan. If the TVF contains multiple statements, the optimizer does not reference the underlying objects and instead treats it much like a table variable: it cannot retrieve statistics, so it guesses that the TVF will return 1 row (in SQL Server 2014 the new cardinality estimator raised that guess to 100), which can cause huge performance issues.

The best (and, IMO, only) time to use a TVF is as an inline TVF, which is normally a single statement but can be a complex single statement incorporating CTEs. A multi-statement TVF is only useful and performant if it is always expected to return a few rows (or, in 2014, right around 100 rows), since that is the assumption the optimizer makes at compile time.
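
For illustration, a minimal inline TVF looks something like this (table and column names are made up) and can be used anywhere you'd use a table:

-- Inline TVF: a single SELECT, so the optimizer expands it like a view
CREATE FUNCTION dbo.fn_OrdersByCustomer (@CustomerId INT)
RETURNS TABLE
AS
RETURN
(
    SELECT o.OrderId, o.OrderDate, o.TotalAmount
    FROM dbo.Orders AS o
    WHERE o.CustomerId = @CustomerId
);
GO

-- Select from it, join to it, or CROSS APPLY it like any other table source
SELECT c.CustomerName, x.OrderId, x.TotalAmount
FROM dbo.Customers AS c
CROSS APPLY dbo.fn_OrdersByCustomer(c.CustomerId) AS x;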

One item to take note of, even with the better-performing inline TVFs: the outer filter is applied after the TVF is executed. Ingest that statement for a minute. If you are calling the function in a join and filtering the original query significantly, that filter is applied after you retrieve all the values from the TVF, which can obviously be a waste of valuable resources and time.

I believe it is best to think of an inline TVF as a filtered view and, like all other things, to test for optimization and speed, remembering to use SET STATISTICS TIME ON and SET STATISTICS IO ON. Execution plans often treat multi-statement and scalar functions like black boxes: they won't show the execution counts on the underlying objects and can give you very misleading information. Gail Shaw's brief discussion goes into more detail: http://sqlinthewild.co.za/index.php/2009/04/29/functions-io-statistics-and-the-execution-plan/

The article below is a deeper dive, and the best article I found, on SQL functions. Jeremiah also provides excellent examples of turning non-performant functions into optimized inline functions and CROSS APPLY statements. https://www.simple-talk.com/sql/t-sql-programming/sql-server-functions-the-basics/

Saturday, May 17, 2014

Back to Basics TSQL: Declaring multiple variables in one line versus individual lines.

I acknowledge that declaring multiple variables in one statement saves typing lines of code, and could save you a small (very small) amount of space in the database (if you add up thousands of stored procedures declaring variables in one statement versus on multiple lines).

I know I go against the grain for code-style preferences compared to the developers I've come across, as I prefer declaring variables on their own lines. The main reason is so I don't have to search through what can be a long list of variables to find something. Preference. The End.

I did want to determine how SQL Server processes the differences, internally. If there was a valid performance boost to declaring these in one line, then I could be swayed to change my preference. 

I ran the following statements on my local version of SQL Server 2014 (Developer) and watched SQL Profiler for the "behind the scenes" details:

Extremely simple example:

Statement 1: declaring multiple variables all on one line (Profiler output captured for comparison).
Statement 2: declaring one variable per line (Profiler output captured for comparison).
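
For reference, the two styles look something like this (the variable names are just illustrative, not the exact ones from my test):

-- Statement 1: multiple variables declared in a single DECLARE
DECLARE @FirstName VARCHAR(50) = 'Test',
        @RowTotal  INT = 0,
        @RunDate   DATETIME = GETDATE();

-- Statement 2: one variable per DECLARE
DECLARE @LastName  VARCHAR(50) = 'Test';
DECLARE @RowCount2 INT = 0;
DECLARE @LoadDate  DATETIME = GETDATE();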


So, at least from my 5 minutes of testing, it appears that SQL Server treats both versions the same way: each script is parsed and executed as a single batch.

Which coding practice should you follow? Whichever one feels right to you, because as far as performance goes, SQL Server does not seem to treat them differently.

Monday, March 3, 2014

SSAS - When processing cube(s), receive "Login failed for user 'DOMAIN\COMPUTERNAME$'"

Pretty simple, but writing this up because I'm diving back into SSAS for some performance testing. This information is for SQL Server 2012, but will be similar for previous versions.

You've created a new SQL Server database that your cubes will pull their data from, as well as a new SSAS database. When you create a data source, the impersonation information tab defaults to "Use the service account". Normally I would create a network user for this, but since I'm installing locally and running tests, I went with the default impersonation configuration.

Open your computer's services to determine the name of the service account. I do this by right-clicking on My Computer -> Manage and expanding Services. For SQL Server 2012, look for this entry:

SQL Server Analysis Services (if you have multiple versions installed, choose the version you are using, in my case 2012). I can see the Log On As is set to: NT Service\MSOLAP$SQL2012

Go to SSMS and expand the Security node. If you do not see the above login, create a new one and make sure it is mapped to the database the cube pulls its data from: the one you created for the data source. Make sure the user has read access to the database by adding the db_datareader role. Hit OK to save.
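
If you prefer T-SQL to the SSMS dialogs, the equivalent is roughly the following (the database name is just an example):

-- Create the login for the SSAS service account
CREATE LOGIN [NT Service\MSOLAP$SQL2012] FROM WINDOWS;
GO

-- Map it into the source database with read access
USE [YourCubeSourceDatabase];   -- the database your data source points at
GO
CREATE USER [NT Service\MSOLAP$SQL2012] FOR LOGIN [NT Service\MSOLAP$SQL2012];
ALTER ROLE db_datareader ADD MEMBER [NT Service\MSOLAP$SQL2012];
GO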

Try to process your cube; you should be golden now.

Thursday, December 19, 2013

Unofficial benchmark results comparing SSIS to Talend

As a follow-up to my previous post, here is the output of my various tests comparing extraction-only processing rates between SSIS and Talend.

Flat to Flat
I created a flat file (.csv) with 10M rows and the following fields (sizes indicate the approximate size of the expected data):
Id - Bigint
OtherId - Int
SomeOtherNumber - Int
RandomName - VarChar(250)
AnotherNumber - Int
BitField - true or false
EmailAddress - VarChar(250)

Three runs were done to verify consistency; results are below:

SSIS: Average duration 44.5 seconds
Talend: Average duration 1 minute 9 seconds

Large Flat File to SQL Server
The same flat file as above, but loaded into SQL Server 2012 located on my laptop. So: flat file -> SQL Server 2012, with no transformations, just straight data columns to data fields. Again, three runs to verify consistency; results are as follows:

SSIS: Average duration 59.6 seconds
Talend: Average duration 6 minutes 56 seconds
(nope, that's not a typo, that's nearly 7 minutes)

Many Files to SQL Server
Next I attempted ingesting from a folder containing 3 subfolders with 79 files, holding a total of 2,187,842 records. Three runs; results are below:

SSIS: Average duration 22.8 seconds
Talend: Average duration 42.2 seconds

Summary
This is very unofficial benchmark testing, trying to determine whether we should start focusing on Talend as an ETL tool over SSIS. As Talend boasts over 450 connectors out of the box and is platform independent, it was worth a look. I did not expect SSIS to outperform Talend in every scenario. I did expect better integration and speed with MSFT products (flat file to SQL Server), which is what SSIS was designed for, and that expectation was met.

This is not an indication that Talend is or may be a poor solution, but in our environment, SSIS is the clear winner so far.
Notes:
To be completely fair, and if I'd had time, I would have installed Talend on Linux and run similar tests, but we are mostly a MSFT shop so I went with what we have access to locally.

Hardware/Software configurations:
Dell Precision M4700, 32GB Ram, Win7 64-bit
[Flat file storage, Talend and SSIS installed on laptop]
Virtual Server with 32GB Ram, Server 2012 64-bit
[SQL Server 2012]
Talend Open Studio - Data Integration v 5.4.1
I'm being pretty vague, as the title states: unofficial benchmark.

I attempted to have ONLY minimal processes running on both machines. I also ran tests without connecting to SQL Server, i.e., local flat-file-to-flat-file ETL, with small files, large files, etc., trying to vary the scenarios we may have at work.

Friday, December 6, 2013

SSIS Package to extract data from Hortonworks to SQL Server

I am in the midst of comparing a few different architectural scenarios, one of which led me to test the functionality of extracting data from Hortonworks (running on a local VirtualBox) using an SSIS package and the Hive ODBC Driver 1.2.

To get you up to speed on the high-level architecture options I am considering, here is an overview:

[Architecture overview diagram]

The goal is to be able to ingest nearly any data source, join it with internal metadata, aggregate and expose it to our users via our application (web based) and/or export slices back into any format customers may require. To determine the best ETL solution for our needs, we are comparing Talend and SQL Server SSIS. This was after some thorough research into other solutions as well, but for our particular needs, these two options seemed viable.

I like that the Talend Data Integration tool comes with connectors to over 400 different data sources, but to use it in an enterprise setting, with shared source control, we'd need to license it properly, which becomes costly as it's on a per-user subscription basis. The alternative is to utilize tools we are already paying for with our SQL Server licensing: Integration Services, which I've been using since SQL Server 2005 (and before 2005 it was DTS).

If you haven't already played with Hortonworks (which can be done on Windows if you need; the easiest way is to download their Sandbox for VirtualBox), I thoroughly encourage you to do so. Their tutorials are extremely easy to follow. Technical disclosure: all of the pieces I'm testing initially are running on my laptop on Windows 7 Pro (64-bit) with 32GB RAM, and I set the VirtualBox VM to use 8GB RAM. The goal is to test the functionality first and then create a working prototype to benchmark and fully test before making a final decision.

One of the tutorials walks through installing the Hortonworks ODBC 1.2 driver (I installed the 32-bit version for this test) to pull data into Excel. I then uploaded a test file into HCatalog in Hortonworks: a 16-column, approximately 48K-row .csv file for my initial dataset.

I configured SSIS with the ODBC connection previously created, and an OLE DB connection to a local SQL Server 2012 database installation. Because I installed the 32-bit ODBC driver, I needed to update the project's debug configuration and set Run64BitRuntime to False.

The SSIS Data Flow task ran successfully, and fairly quickly considering all the objects are on the same local machine, which is far from an optimal setup.

Next I'll be attempting to do a similar test in Talend. I'll post the results of that one soon.

Sunday, July 7, 2013

Columnstore Indexes: the best thing since cake (and I love cake!)

Most software upgrades bring a few "oh, that's great, I think I may be able to utilize that" moments. When I read about and dove deeper into SQL Server 2012's new feature, columnstore indexes, my heart actually started to race. It takes a true love of data warehouses to feel this way, but it's as if SQL Server answered some of my questions and frustrations from the past few years. Those frustrations grew out of dealing with TBs worth of data and how normal row indexes (normal clustered and non-clustered indexes are row indexes) just were not optimized for VLDB querying.

From msdn online, here is a summary of the difference: 

"An xVelocity memory optimized columnstore index, groups and stores data for each column and then joins all the columns to complete the whole index. This differs from traditional indexes which group and store data for each row and then join all the rows to complete the whole index."

The biggest difference is that data is grouped and stored one column at a time. The benefits: because only the columns needed are read, there are fewer disk reads, better compression, and improved buffer pool usage (reducing I/O), and batch processing reduces CPU.

Wah? Yes! Wah? I'll say it again, YES! And yes, it works with table partitions. Of course, there are logical limitations, the most important being that the table must be read-only. But in a normal data warehouse architecture, the data is not transactional and is loaded at certain times, most commonly once or maybe a few times a day. So dropping and recreating indexes is something most data warehouse engineers are familiar with, in detail. Or you can implement partitions and then switch, among other options, to get around the read-only requirement. There are other limitations, which I encourage you to explore, as well as some gotchas.
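
As a minimal sketch (the fact table and column names are made up), creating the 2012 nonclustered variety looks like this, and a typical warehouse load drops and recreates it around the load:

-- Nonclustered columnstore index over the columns the queries actually touch
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_ColumnStore
ON dbo.FactSales (DateKey, ProductKey, StoreKey, SalesAmount, Quantity);
GO

-- Typical warehouse load pattern: drop, load, recreate
DROP INDEX IX_FactSales_ColumnStore ON dbo.FactSales;
-- ...bulk load dbo.FactSales here...
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_ColumnStore
ON dbo.FactSales (DateKey, ProductKey, StoreKey, SalesAmount, Quantity);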

A fantastic and thorough walkthrough of this new, exciting feature in SQL Server 2012 can be found here: What's new in SQL Server 2012: Using the new Columnstore Index, a YouTube video by Kevin S. Goff.

My hat's off to the SQL Server development team; it warms my heart that you addressed the growing needs of us data warehouse minions.

Friday, June 21, 2013

SharePoint 2013 - allow others to edit a discussion (not just reply)

One of many SharePoint tips...

Some sites may want more collaboration, the ability to edit discussion topics, etc. I would highly recommend turning on version control for the list you modify in order to see who changed what. Here are the steps:

You need to be in the Site Owners group to make these changes. If you don't have the appropriate permissions, contact your SharePoint administrator to assist.

Go to Site Contents, click on the discussion list name and in the ribbon click on List, then List Settings.

Once in List Settings, click on Versioning Settings and make sure Create a version each time you edit is set to Yes. I typically do not require content approval for submitted items, but this is an option. It would not allow anything to show until one member of the Approval Members group approves the content. 

Once you click "OK" on the Versioning Settings screen, go into Advanced Settings.


Under Create and Edit access, select "Create and edit all items" instead of "...created by the user", then be sure to click the OK button.

Next, we're going to alter the Site Members contribute rights. Click on the Gear in the upper right hand corner and select Site Settings, then under Users and Permissions, select Site permissions. Check the box next to the [Site Name] Members name, and the "Edit User Permissions" option on the ribbon will become available.


Click Edit User Permissions; the default rights should be set to Contribute only. Check the boxes next to Edit and Approve:

Then click the "OK" button. Now have a site member test the ability to "Edit" a discussion but finding a discussion topic, clicking the ellipses and seeing the "Edit" ability like so:


Hope this helps. 

Tuesday, June 18, 2013

SharePoint 2013 - How to make the discussion edit window wider without altering the Master Pages.

One of many SharePoint tips...

When creating or editing a discussion, the text field for the body of the discussion is way too narrow. Can it be changed?

There are a few ways; the easiest is to implement jQuery in a web part on the page, so that no one is mucking with the templates in SharePoint, which can be overwritten by patches, etc.

The following approach needs to be done on each site or sub-site, as editform.aspx is unique (copied from the site collection level when a site is created), which means implementing this on one site will not affect the form on any other site.

Note: You need design rights or site ownership rights in order to perform this modification.

Steps:
Go to the site you would like to implement this on.
Go to a discussion, click the ellipsis (...) and then Edit, and you should see a layout similar to this, where the "Body" text area is fairly narrow.

You'll want to edit the page this form displays in, which is why I've highlighted the gear in the upper right-hand corner: click on it and then select "Edit Page".

Now you will want to click "Add a Web Part"; we're going to be adding a Content Editor.

After you click "Add a Web Part", choose Media and Content, then Content Editor, and click "Add".

Now you will see your page again with your newly added Content Editor web part. Click the "Click here to add new content":

It will then place your cursor in the Content Editor, but what we want to do is edit the HTML, so select the Edit Source button as shown here:

In the HTML Source editing window, type the following code:

[Code snippet from the screenshot that widens the Body text area]

Your window should now look like this:

Click the "OK" button at the bottom of the HTML Source window. Then you are back on the edit page, click the "Page" on the ribbon and select "Stop Editing" as shown here:






It will redirect you to the discussion list, so go back into a discussion, click the ellipsis and then Edit, and now your text areas should be wider.

Happy Customizing.

Friday, August 17, 2012

Checking VPN logs... CiscoSecure ACS v4.0

I was asked to create a "simple time clock" front end that integrates with our users' HR data (downloaded nightly in a data pull I built from HRB). One of the potential pitfalls I pointed out to the person requesting the interface and data objects was that, since most of our technical staff have VPN access turned on by default, they could potentially clock in or out from home.

So I started my search to determine whether a user is on our network from their desk or via VPN. My network engineer pointed me to the interface showing the logs. I saw how simple the output was and believed it was just reading from a text file. Sure 'nuf, I found the text file locally on the domain controllers. There are probably multiple ways to determine how someone is connected, but I couldn't come up with any off the top of my head after a brief pow-wow with my network engineer, so this is the direction I went.

I found the domain controllers that held the logs (.csv files) for passed authentications, located in ProgramFiles\CiscoSecure ACS v4.0\Logs\Passed Authentications, and the files were named Passed Authentications Active.csv. Ahhh, data, data, data...

When a user logs into the simple time clock application, I check these log files for the user name together with the text "Remote Access (VPN)" within a certain window of time, say the last hour, to attempt to verify the web app is not being accessed by someone connected via VPN.
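
As a rough sketch of that check, assuming the relevant log rows get loaded into a staging table first (the table and column names here are hypothetical):

-- Did this user authenticate over VPN within the last hour?
DECLARE @UserName VARCHAR(128) = 'DOMAIN\jdoe';   -- hypothetical user

SELECT a.UserName, a.AuthDateTime
FROM dbo.PassedAuthenticationsStage AS a          -- staging table loaded from the .csv logs
WHERE a.UserName = @UserName
  AND a.NASPortType = 'Remote Access (VPN)'       -- the text flag from the ACS log
  AND a.AuthDateTime >= DATEADD(HOUR, -1, GETDATE());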

An over-simplified schema of how I solved this problem:
[Schema diagram]

If you need more details, let me know.

Tuesday, July 24, 2012

SSRS Prompting for login when deploying reports or data sources (SSRS 2008)

While migrating SSRS 2005 reports to a new SSRS 2008 server, I had to recreate the project in Visual Studio and add the data source and the .rdl files, because my original project files became corrupted (long story).

I saved a few .rdl files to the new folder, created a new and updated .rds (shared data source), and attempted to deploy to the new server, to a subfolder under the main reportserver folder.

I was repeatedly prompted for my login. After some futzing around, I stripped all the subfolder items out of the project properties and found I could deploy to the main reportserver folder, so I thought I'd capture the correct configuration for future reference.

What I had been doing was adding the subfolder name after /reportserver/ in the TargetServerURL. In reality, you need to put the subfolder in the "TargetReportFolder", and if you also want your data source under that subfolder, update the "TargetDataSourceFolder" to the subfolder's data sources path.
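
For example, the working deployment properties end up looking something like this (server and folder names here are made up):

TargetServerURL:        http://reportserver01/reportserver
TargetReportFolder:     Finance Reports
TargetDataSourceFolder: Finance Reports/Data Sources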

Hope it helps.

Thursday, March 15, 2012

SQL DBA Best Practices - Why separate tempdb?

I've had discussions with server/system architects attempting to justify not only separating the database and log files, but also putting tempdb on separate disks.

This is one of those recommendations that applies more to SQL Server 2008 and beyond.

Did you know that tempdb not only stores temporary tables, but SQL Server also utilizes it for grouping and sorting operations, cursors, the version store supporting snapshot isolation level, and overflow for table variables? 

Hopefully, knowing about these other uses, which can be heavy even if you do not believe your databases create many temporary objects, will arm you with ammunition to vote for more, and separate, physical disks for tempdb.

The cost of additional disks is dwarfed by the speed benefit you will get, especially in data warehouse situations where some BI solutions create very large temp tables to join against before returning results.
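
As a hedged example, pointing the tempdb files at dedicated drives looks roughly like this (drive letters and paths are assumptions, the logical names are the defaults, and the change takes effect after a service restart):

-- Move the tempdb data and log files to dedicated disks
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'L:\TempDBLog\templog.ldf');

-- Restart the SQL Server service for the new file locations to take effect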

Thursday, November 10, 2011

TSQL - Refresh all orphaned users of a database after a restore

I've used this script for years; I'm not sure where I pilfered it from, but I'm putting it here for safekeeping.
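
The general approach, as a minimal sketch using sp_change_users_login (not necessarily the exact script), looks like this:

-- List the orphaned users in the current database
EXEC sp_change_users_login 'Report';

-- Remap each orphaned SQL user to the login of the same name
DECLARE @UserName sysname;

DECLARE orphan_cursor CURSOR FOR
    SELECT dp.name
    FROM sys.database_principals AS dp
    LEFT JOIN sys.server_principals AS sp ON dp.sid = sp.sid
    WHERE dp.type = 'S'                -- SQL-authenticated users
      AND sp.sid IS NULL               -- no matching server login
      AND dp.name NOT IN ('dbo', 'guest', 'INFORMATION_SCHEMA', 'sys');

OPEN orphan_cursor;
FETCH NEXT FROM orphan_cursor INTO @UserName;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_change_users_login 'Auto_Fix', @UserName;
    FETCH NEXT FROM orphan_cursor INTO @UserName;
END
CLOSE orphan_cursor;
DEALLOCATE orphan_cursor;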

Thursday, July 7, 2011

Lovin' the OUTPUT clause...

Change is good, even if it appears big and scary at first…

Ever find a MUCH easier way of doing things and just fall in LOVE with it? It was that way for me once I discovered table-valued functions in SQL Server. From that moment on I saw everything as a function "hill" to climb. My happiest moment: turning a query that took over 90 seconds into one that took 3 seconds, because of this discovery and a subsequent re-architecture of all database calls.

Fast forward to… now… the OUTPUT clause. I know, I know, it's been around since SQL Server 2005, but until you wrap your head around it, create a new database, or have time to re-architect something old, you can't really appreciate the implications until you incorporate it into a new database design.

One call to the database to update or delete can not only save your data, but also OUTPUT the data you want to keep into another table for, oh I don't know, change management, historical, CYA implications.

Simple usage:

DELETE FROM [table]
OUTPUT deleted.* INTO [tablearchive]
FROM [table1] t1
JOIN [table] ON t1.ID = [table].table1_ID
WHERE t1.ID = 123

Double duty: deleting (or updating or inserting, for that matter) and saving whatever details from that statement into another table, saving yourself either an additional call from the front end or, at the very least, an additional SQL statement.
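
The same trick works on UPDATE, where you can capture both the before and after values in one shot (table and column names are made up):

-- Update a price and archive the old and new values in one statement
UPDATE p
SET p.Price = p.Price * 1.10
OUTPUT deleted.ProductID, deleted.Price, inserted.Price, GETDATE()
INTO dbo.ProductPriceHistory (ProductID, OldPrice, NewPrice, ChangedOn)
FROM dbo.Products AS p
WHERE p.ProductID = 123;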

Just keeping it simple.

Monday, June 13, 2011

Find Columns Named ? in SQL Database (SQL Server 2005 and above)

I've used this script for many, many years. I'm sure I borrowed the majority of it from a site. It comes in handy when I'm looking for all references of a particular column in a database.

Honestly, I use this one several times a month, so putting it here for safekeeping.
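
The general shape of such a query, as a sketch rather than my exact script, is something like this:

-- Find every table column whose name matches the search term
SELECT  s.name  AS SchemaName,
        t.name  AS TableName,
        c.name  AS ColumnName,
        ty.name AS DataType,
        c.max_length,
        c.is_nullable
FROM sys.columns AS c
JOIN sys.tables  AS t  ON c.object_id = t.object_id
JOIN sys.schemas AS s  ON t.schema_id = s.schema_id
JOIN sys.types   AS ty ON c.user_type_id = ty.user_type_id
WHERE c.name LIKE '%CustomerID%'     -- change the search term here
ORDER BY s.name, t.name, c.name;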

Tuesday, March 8, 2011

System.Data.SqlClient.SqlException: Timeout expired

You are most likely getting this because you are leaking connections. A good rule to follow is borrowed from Angel Saenz-Badillos' blog post:

public void DoesNotLeakConnections()
{
     using (SqlConnection sqlconnection1 = new SqlConnection("Server=.\\SQLEXPRESS;Integrated Security=SSPI;Connection Timeout=5"))
     {
          sqlconnection1.Open();
          SqlCommand sqlcommand1 = sqlconnection1.CreateCommand();
          sqlcommand1.CommandText = "raiserror ('This is a fake exception', 17,1)";
          sqlcommand1.ExecuteNonQuery(); // this throws a SqlException every time it is called
          sqlconnection1.Close();        // still never gets called
     } // here sqlconnection1.Dispose is _guaranteed_
}

Tuesday, January 4, 2011

SQL Server SSIS: The connection "{SSIS Object ID, crazy long string}” is not found.

I would “hanker” a guess you copied a connection into a new SSIS Package, changed the name and successfully ran the package from Visual Studio. Yet, deploying it to the server produced the above error and when you search for the ID of the object the error message specifies, you can’t find it. Am I right? Even with a visual search through the IDs, you are unable to find the pesky ID stated in the error in your package, correct?

If so, try searching the entire solution for the ID; you should find the original object you copied, and the search will also give you a hint of which object it was copied to that you need to reproduce.

In the package, do an Edit -> Find and Replace -> Entire Solution and paste in the annoying and mysterious ID from the error message received from the failing SQL Server Agent job. This search should reveal at least two, most likely four, lines where this ID shows up. Find the match that is in the .dtsx package that is failing. If you scroll to the right in the Find Results window, you should see the user-friendly name of the object. It will be something like:

<DTS:Property DTS:Name="ObjectName">Some Object Name</DTS:Property>

You can also view the package contents by finding it in Explorer and opening it with Notepad to reveal the XML. In this view, search for the string of the object ID that is not being found, and this will show you the friendly name of the object.

Now you have the user-friendly name of the object. You should re-create the connection from scratch (do not copy/paste). Use this new connection in the task(s) necessary, then delete the old connection, rebuild, and deploy. Test again, if possible, from SQL Server Agent.

From my research it seems that Visual Studio sometimes has issues copying connections from one package to another. Sometimes it keeps the old ID in the manifest or logging, and sometimes it even erases the ID and coughs up blood.

Hopefully, helpfully yours… Lara

Tuesday, November 30, 2010

Connecting to a Web Service, you CAN set timeout on the client...


This information would have been helpful, ohhhhh, a few months back, after countless (and random, of course) timeout occurrences and lots of hair loss and shoulder shrugging.

Background: I built a .NET console application that downloads our HR data from a "major" HR provider. This project involved security certificates (purchasing and exchanging) on the provider's server as well as our client server, logins and passwords, and research, research, research. The company provided sample code, which basically showed how to query one module at a time.

But my task was to download all the data for all employees every day, including data for terminated employees. We have over 600 active employees, and each employee obviously has rows in most of the modules of data, and sometimes more than one row per module (I call them modules for lack of a better term). This equates to downloading approximately 70,000 records once a day. The connection and download take approximately 9 minutes; of that, I'd say a few minutes are spent waiting for responses, so I'd guess the actual work takes about 5 minutes. Not bad, I think, but I don't have much to compare it to at this point.

The provider does not provide a "changed records" mechanism (or at least they did not tell me they have one, although many of the hurdles I came across on this journey turned out to be questions with simple answers once asked enough times; maybe the question just had to be asked of the right person?). Anyway, I stray... One of the first errors I would receive, randomly, was a timeout when attempting to download data from a module. One particular module would always time out, so I had to break up its requests by choosing an additional filter.

So, things seemed to be pretty stable, with a timeout occurring once every week or two. I could handle that, especially since when I asked the provider about it and sent them the error messages, I was given the response "it's not on our side"... Time passed, and the timeouts increased to the point where I couldn't get that one module, previously broken down into four filtered requests, to even budge.

I sought empathy from our network engineer to monitor the traffic back and forth, and I also re-wrote most of the application to accumulate the SQL insert commands into a lengthy string and execute it after each module completed, so I wasn't firing off single SQL insert statements continually. Still, nothing seemed to help. I narrowed it down with debug print statements and could show that there was over a 1.5 minute delay between when a request was sent and the response on that one pesky module. Everything I'd researched stated the timeout value was set inside IIS on the provider's server.

After compiling my error list and debug statements into a very convincing e-mail to our HR application provider, I finally received a response from them telling me to up my timeout setting to 5 minutes… Huh?

And this is what you get when someone learns C# from a book and sample code, rather than having an expert mentor them...

Simple addition of this line in each module call to download data:

[ServiceName] proxy = new [ServiceName]();
proxy.Timeout = 300000;

Yep, just that simple. Months of frustration being fixed with 23 characters. Sometimes you feel like an idiot at the end of the day. Live and learn.

Thursday, November 4, 2010

Determine Size of All Tables in a database...

This script has come in handy a few times, enough to warrant me posting this so I don't have to remember where I put it. Elephant wishes...
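
A query along these lines (a sketch, not necessarily the exact script) returns row counts and space used per table:

-- Row counts and space used per table, largest first
SELECT  s.name AS SchemaName,
        t.name AS TableName,
        SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS RowCounts,
        SUM(ps.reserved_page_count) * 8 AS ReservedKB,
        SUM(ps.used_page_count) * 8 AS UsedKB
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables  AS t ON ps.object_id = t.object_id
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
GROUP BY s.name, t.name
ORDER BY ReservedKB DESC;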

Wednesday, September 22, 2010

Find Slow Running Queries - great T-SQL script find!

If you've ever used sp_who and/or sp_who2, you'll appreciate the insight from sp_whoisactive, a free script Adam Machanic wrote and shared. It is way more useful than the out-of-the-box procedures.

There is a great video tutorial and link to the free download here: brentozar, and even a way to save the results over time to a table.
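
Basic usage is as simple as the following; @get_plans is one of the documented parameters, but check the current documentation for the full list:

-- Default call: one row per active request/session
EXEC dbo.sp_WhoIsActive;

-- Include the query plans of whatever is currently running
EXEC dbo.sp_WhoIsActive @get_plans = 1;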