  • Strong Name an Assembly Without Source Code

    Because I program in the 1970s, part of what I do is programming against modems and serial ports. Specifically, we send medical claims over a modem to adjudicators when internet access is unavailable. As a customer, you’ll appreciate having your pharmacy delay you for 20 seconds rather than saying “Sorry; you can’t have your medicine.”

    While taking a break from disco dancing, I was trying to add a strong name to one of our projects. One of the requirements, however, for strong naming is that all of the assemblies you reference must be strong named as well. All are, except one: COMM-DRV/Lib.Net. We use this assembly to talk to the modems, because it abstracts a lot of the serial communications and allows us to program against a logical “modem”.

    I called the vendor, Willies Computer Software Company, and he said that we’d have to have the source license to strong name it, so I asked why they don’t strong name their own binary distribution. He just kept saying they don’t do that and that we’d have to buy the source license. We didn’t need to change their source, though; in fact, I had already extended their code by inheriting their class and hiding some methods with new ones.

    So, I set about trying to find out how to strong name an assembly for which you do not have the source. The short answer is “you can’t.” The longer answer is “you can’t, unless you disassemble it and then reassemble it, signing it in the process, and only if it isn’t obfuscated.” I finally found the answer I was looking for at geekzilla.

    Here are the steps you need (back up your old assembly):

    Create a Key Pair

    This is only necessary, of course, if you don’t already have one generated for yourself or your company.

    sn.exe -k C:\Path\To\KeyPair.snk

    Disassemble the Assembly into IL

    ildasm.exe CdrvLibNet.dll /out:CdrvLibNet.il

    Reassemble the Assembly from IL While Signing

    ilasm.exe CdrvLibNet.il /dll /output=CdrvLibNet-StrongNamed.dll /key=C:\Path\To\KeyPair.snk

    The Result

    Thankfully, everything worked great, because their assembly was not obfuscated (it’s basically a .Net wrapper around their C++ unmanaged, native DLL product, so no real intellectual property to steal here). Now, we have their product strong named with our public key, and we are able to build on top of it with a strong named product of our own.
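
    If you want to sanity-check the result from code, here’s a minimal sketch (it assumes the reassembled DLL sits in the current directory; the class name is just for illustration):

        using System;
        using System.Reflection;

        class VerifyStrongName
        {
            static void Main()
            {
                // Read the assembly's identity without fully loading it.
                AssemblyName name = AssemblyName.GetAssemblyName("CdrvLibNet-StrongNamed.dll");

                // A strong-named assembly carries a non-empty public key token.
                byte[] token = name.GetPublicKeyToken();
                Console.WriteLine(token != null && token.Length > 0
                    ? "Strong named; public key token: " + BitConverter.ToString(token)
                    : "NOT strong named.");
            }
        }

    Alternatively, sn.exe -v CdrvLibNet-StrongNamed.dll verifies the signature from the command line.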


  • How Insensitive

    Update: Added section at the bottom detailing what not to do.

    As part of the updater process that I wrote for a current project, a “boot strapper” program queries the database for available versions; if there is a newer version available, it deletes the current program folder, replaces those files with the new files, and then executes the main program executable. If there are no newer versions, it simply executes the main program executable immediately. This gives the user the impression that they are executing the program itself, instead of the “invisible” boot strapper, while letting us easily manage updates.

    Obviously, if you are a .Net developer, you know about the venerable app.config, which is used to set program parameters at run time rather than at design time. Because these files can contain changes to our connection strings and custom configuration sections, we wanted to preserve them and other special files that may be used in the future. As our solution, we created a whitelist of file extensions that we do want to replace, which looked like this:

            /// <summary>
            /// Extensions of files that may be deleted during uninstall/reinstall.
            /// </summary>
            private List<string> _deletableExtensions = new List<string>(
                new string[] { ".exe", ".dll", ".pdb", ".chm", ".manifest" });
    
            private bool canDeleteOrOverwrite(FileInfo file)
            {
                // Only delete files with specific extensions.
                bool canDelete = _deletableExtensions.Contains(file.Extension);
    
                return canDelete;
            }

    Before anybody murders me: we camel case private methods, and we underscore prefix and camel case private class-level variables here. As you can see, this will allow deletion of executables, class libraries, debug symbols, help files, and manifests. However, there was a problem in my code that didn’t surface until yesterday, when we were trying to figure out why one file out of all the files in the directory, “ActiveReports3.DLL”, remained at an older version.

    What may be obvious to the reader, especially considering the title of this post, is that my List<T>.Contains() check, by default, is case sensitive, and “.DLL” != “.dll”. This required quite a bit of stepping through code to find, as I wrote this code a long time ago. A simple press of “Ctrl+Shift+Space” revealed that there was an overload of IEnumerable<T>.Contains(T item, IEqualityComparer<T> comparer).

    Because I’ve had to do this in the past, I knew that, because my T in this case was string, all I needed to do was use the StringComparer, which implements the IEqualityComparer<string> interface. Because it is filenames, and we are working with Visual Studio generated files, we care neither about culture nor case in this instance. Here is the revised, working version:

            /// <summary>
            /// Extensions of files that may be deleted during uninstall/reinstall.
            /// </summary>
            private List<string> _deletableExtensions = new List<string>(
                new string[] { ".exe", ".dll", ".pdb", ".chm", ".manifest" });
    
            private bool canDeleteOrOverwrite(FileInfo file)
            {
                // Only delete files with specific extensions.
                bool canDelete = _deletableExtensions.Contains(
                    file.Extension,
                    StringComparer.InvariantCultureIgnoreCase);
    
                return canDelete;
            }

    That’s it! That tiny change fixed our issue and allowed the boot strapper to overwrite that file. It’s amazing that a simple little omission like that can lead to such a strange problem manifesting itself. Everybody makes mistakes; I just thought I’d showcase one of my errors and how I fixed it.

    After talking to one of my coworkers about this post, he mentioned something that we have both seen done to “work around” this particular problem, which should not be done. This is what I have seen before in others’ code:

            /// <summary>
            /// Extensions of files that may be deleted during uninstall/reinstall.
            /// </summary>
            private List<string> _deletableExtensions = new List<string>(
                new string[] { ".exe", ".dll", ".pdb", ".chm", ".manifest" });
    
            private bool canDeleteOrOverwrite(FileInfo file)
            {
                bool canDelete = false;
    
                // Only delete files with specific extensions.
                foreach (string currentExtension in _deletableExtensions)
                {
                    if (currentExtension.ToLower() == file.Extension.ToLower())
                    {
                        canDelete = true;
                        break;
                    }
                }
    
                return canDelete;
            }

    This creates a LOT of strings on the heap in the process and is very inefficient. It’s much better to let .Net handle it and simply tell it whether you care about culture and/or case sensitivity.
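
    As a side note: since file extensions aren’t linguistic text, StringComparer.OrdinalIgnoreCase is arguably an even better fit than the invariant-culture comparer, though either works here. A minimal sketch of the difference between the default comparer and an explicit one:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class ComparerDemo
        {
            static void Main()
            {
                var extensions = new List<string> { ".exe", ".dll", ".pdb" };

                // List<T>.Contains uses the default comparer: ordinal, case sensitive.
                Console.WriteLine(extensions.Contains(".DLL")); // False

                // The Enumerable.Contains overload accepts an IEqualityComparer<string>.
                Console.WriteLine(extensions.Contains(
                    ".DLL", StringComparer.OrdinalIgnoreCase)); // True
            }
        }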


  • Happy 123456789 Day!

    Happy 123456789 day! It just passed 12:34:56 on 7/8/9 right now in the United States (Central time zone)!

    Hooray for arbitrary dates, but at least that one won’t happen like that for a while. For our friends almost everywhere else in the world, you have August 7th to look forward to for your 123456789 day.


  • A Day Late and a Property Declaration Short

    I know this is a bit late to the game, but this morning, as I’m refactoring a bunch of old code to be shared with a new project, I’m cleaning up the C# 2.0 property declarations we all know and love:

        public class SaleResponse : Response
        {
            protected bool _addressVerificationSuccess;
            public virtual bool AddressVerificationSuccess
            {
                [DebuggerStepThrough]
                get { return _addressVerificationSuccess; }
                [DebuggerStepThrough]
                set { _addressVerificationSuccess = value; }
            }
    
            protected bool _cvvVerificationSuccess;
            public virtual bool CvvSuccess
            {
                [DebuggerStepThrough]
                get { return _cvvVerificationSuccess; }
                [DebuggerStepThrough]
                set { _cvvVerificationSuccess = value; }
            }
    
            protected string _authorizationCode = string.Empty;
            public virtual string AuthorizationCode
            {
                [DebuggerStepThrough]
                get { return _authorizationCode; }
                [DebuggerStepThrough]
                set { _authorizationCode = value; }
            }
    
            protected string _referenceNumber = string.Empty;
            public virtual string ReferenceNumber
            {
                [DebuggerStepThrough]
                get { return _referenceNumber; }
                [DebuggerStepThrough]
                set { _referenceNumber = value; }
            }
    
            protected decimal? _authorizedAmount;
            public virtual decimal? AuthorizedAmount
            {
                [DebuggerStepThrough]
                get { return this._authorizedAmount; }
                [DebuggerStepThrough]
                set { this._authorizedAmount = value; }
            }
        }

    This code is, of course because I wrote it, wonderful and has no flaws, and I used the prop snippet to create each private variable and public property getter and setter. However, I have all of those DebuggerStepThrough attributes in there to avoid the lovely fun of stepping in and out of the property declarations while debugging code that references them.

    Thank the flying spaghetti monster that now, in C# 3.0 (and of course 3.5, 4.0 and on), Microsoft has given us auto-implemented properties. This is now the equivalent code, avoiding the stepping in/out of the get/set and not requiring the declaration of a private variable:

        public class SaleResponse : Response
        {
            public virtual bool AddressVerificationSuccess { get; set; }
            public virtual bool CvvSuccess { get; set; }
            public virtual string AuthorizationCode { get; set; }
            public virtual string ReferenceNumber { get; set; }
            public virtual decimal? AuthorizedAmount { get; set; }
        }

    When I first learned this trick, I was extremely thankful to be able to use it as an even shorter shortcut than the prop snippet, and it makes the code a lot prettier. What I didn’t immediately realize is how to control scoping. Let’s say, for instance, I want the AuthorizationCode property to be settable only from inside the class itself, and I want the AuthorizedAmount property to be settable only from inside the class and inherited classes. I can then change those declarations like this:

        public class SaleResponse : Response
        {
            public virtual bool AddressVerificationSuccess { get; set; }
            public virtual bool CvvSuccess { get; set; }
            // This can now only be set from inside the class
            public virtual string AuthorizationCode { get; private set; }
            public virtual string ReferenceNumber { get; set; }
            // This can now only be set from inside the class and those that inherit from it
            public virtual decimal? AuthorizedAmount { get; protected set; }
        }

    This allows you to control the scoping of the getter and setter independently. By default, they inherit the visibility of the property declaration, in this case, public for all. I’m always thankful for the tools Microsoft gives me to make my life easier and to give me more time to spend on the actual work, rather than installing plumbing.
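
    To make the effect concrete, here’s a quick sketch of what callers can and can’t do with those declarations (the Consumer and RefundResponse classes are hypothetical, invented just for illustration):

        public class Consumer
        {
            public void Demo(SaleResponse response)
            {
                // Reading is fine everywhere; the getters are public.
                decimal? amount = response.AuthorizedAmount;

                // These lines would not compile outside the class hierarchy:
                // response.AuthorizationCode = "ABC123"; // error: setter is private
                // response.AuthorizedAmount = 10.00m;    // error: setter is protected
            }
        }

        public class RefundResponse : SaleResponse
        {
            public void ApplyRefund(decimal amount)
            {
                // A derived class can use the protected setter...
                AuthorizedAmount = -amount;

                // ...but still cannot touch the private one:
                // AuthorizationCode = "REF";             // error: setter is private
            }
        }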


  • Out of Memory Exception While Attempting to Do SQL CLR

    Update: We figured out how to make it work with the help of our DBA and Jonathan Kehayias (see comments). We increased SQL Server’s MEM_TO_LEAVE allocation, by adjusting the -g command line switch for the service, to 448MB. This increase of the shared memory pool gave SQL Server enough breathing room for its worker threads, and now we are able to return 1536 records in 1 minute 21 seconds, including the Bitmap conversion.
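
    For reference, -g is a startup option for the SQL Server service (sqlservr.exe); the value is the number of megabytes to reserve outside the buffer pool. Ours effectively amounts to this (how you attach it to your instance’s startup parameters will vary):

    sqlservr.exe -g448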

    Furthermore, we found a workaround using our reporting engine, which we will implement when we have some breathing room ourselves in our rollout timeline. This will “get it working” for now, which is the requirement handed down from above. Soon though, we will have this working in our application tier, where it belongs.

    SQL Server’s CLR abilities are really cool. I have done some benchmarking, performing the same computations in both T-SQL and CLR, and I have found CLR to outperform T-SQL by factors of greater than 10 to 1. It’s fantastic for this use.

    Unfortunately, it has won a battle against me today. I’ll provide a bit of background first. I am currently working on the Point of Sale component of my company’s pharmacy system, and in particular, I am writing all of the code associated with interacting with the signature pad. I’ve abstracted everything nicely, such that we can support multiple pads, and I just have to write a .Net class that implements the ISignaturePad interface.

    Obviously, other than the navigational aspects of buttons and listboxes and sale line items being displayed on the pad, capturing signatures themselves is paramount. Because each signature pad can spew out the signature data in a different way, and we want to store the “perfect” vector information, I’ve abstracted the signature data into two classes. The Signature and SignaturePoint classes’ definitions look like this:

        [Serializable]
        public partial class Signature
        {
            public virtual SignaturePoint[] Points { get; protected set; }
            public virtual SignaturePoint TopLeft { get; set; }
            public virtual SignaturePoint BottomRight { get; set; }
            public int Width
            {
                get
                {
                    return BottomRight.X - TopLeft.X;
                }
            }
            public int Height
            {
                get
                {
                    return BottomRight.Y - TopLeft.Y;
                }
            }
            public int XDpi { get; set; }
            public int YDpi { get; set; }
    
            private Signature()
            {
            }
    
            public static Signature CreateFromTT8500String(string signaturePoints)
            {
                var sig = new Signature();
                sig.Points = sig.CreateFromTT8500PointsData(signaturePoints);
                return sig;
            }
    
            public virtual byte[] Serialize()
            {
                using (var ms = new MemoryStream())
                {
                    var bf = new BinaryFormatter();
                    bf.Serialize(ms, this);
                    return ms.ToArray();
                }
            }
    
            public static Signature Deserialize(byte[] serializedSignatureBytes)
            {
                using (var ms = new MemoryStream(serializedSignatureBytes))
                {
                    var bf = new BinaryFormatter();
                    return (Signature)bf.Deserialize(ms);
                }
            }
    
            protected virtual SignaturePoint[] CreateFromTT8500PointsData(string signaturePoints)
            {
                List<SignaturePoint> list = new List<SignaturePoint>();
    
                // Do a lot of work here to change the strange format that we get
                // as a string into bytes and transform them into an array of my custom class
    
                return list.ToArray();
            }
    
            protected void CropPoints(List<SignaturePoint> list)
            {
                foreach (var point in list)
                {
                    point.X -= TopLeft.X;
                    point.Y -= TopLeft.Y;
                }
    
                BottomRight.X -= TopLeft.X;
                BottomRight.Y -= TopLeft.Y;
                TopLeft.X = 0;
                TopLeft.Y = 0;
            }
        }
    
        [Serializable]
        public class SignaturePoint
        {
            public int X { get; set; }
            public int Y { get; set; }
            public bool PenUp { get; set; }
    
            public SignaturePoint()
            {
            }
    
            public SignaturePoint(int x, int y)
                : this(x, y, false)
            {
            }
    
            public SignaturePoint(int x, int y, bool penUp)
                : this()
            {
                X = x;
                Y = y;
                PenUp = penUp;
            }
    
            public Point ToPoint()
            {
                return new Point(X, Y);
            }
    
            public override string ToString()
            {
                return ToPoint().ToString();
            }
        }

    As you might have noticed, Signature is marked as Serializable, and that’s exactly what we’re doing to store the “perfect” information in the database. We call the Serialize() method on my Signature class, and store the resulting byte array in the database as VARBINARY(MAX). It works fine when we pull that back with ADO.NET and re-hydrate a Signature object with my static Deserialize() method.
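
    For illustration, that round trip looks roughly like this (a sketch; the SignatureStore helper, the Sales table, and the SignatureData/SaleId names are invented, and our real schema differs):

        using System.Data.SqlClient;

        public static class SignatureStore
        {
            public static void Save(string connectionString, int saleId, Signature signature)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "UPDATE Sales SET SignatureData = @data WHERE SaleId = @id", connection))
                {
                    // Serialize() returns the byte[] we persist as VARBINARY(MAX).
                    command.Parameters.AddWithValue("@data", signature.Serialize());
                    command.Parameters.AddWithValue("@id", saleId);
                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }

            public static Signature Load(string connectionString, int saleId)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT SignatureData FROM Sales WHERE SaleId = @id", connection))
                {
                    command.Parameters.AddWithValue("@id", saleId);
                    connection.Open();

                    // Re-hydrate the Signature from the stored bytes.
                    return Signature.Deserialize((byte[])command.ExecuteScalar());
                }
            }
        }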

    To actually draw a signature on a picture box on a Windows form, for example, we call my ToBitmap() method, which lives in another file (partial class) and generates a bitmap of the requested width, height, and pen width, suitable for display on a receipt, screen, report, etc.

    However, due to a limitation of our ridiculous reporting engine (we are currently trying to work around its oddities), for an upcoming beta, we are trying to get SQL Server to create the bitmaps for passing back up to our reporting engine. Yes, I do know that is application tier logic and shouldn’t be performed at the database level. We are still trying to work around it using custom controls with the reporting engine.

    So, I created a SQL CLR scalar function in my Signature class (SqlBytes CreateBitmap(SqlBytes serializedSignatureBytes, SqlInt32 width, SqlInt32 height)), moved Signature and SignaturePoint to a CompanyName.SignaturePad.Common assembly, and added a reference to System.Drawing. I added the assembly to SQL Server and fought it a bit (setting TRUSTWORTHY to ON for the database). I had to manually add System.Drawing as well, because the version on my computer didn’t exactly match the one on the server; yet another pain, and an indication I shouldn’t be doing this. Another indication was SQL Server warning me that System.Drawing hadn’t been tested and that the universe will indeed explode if they change something in it. I accept the risks, at the moment.
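
    For anyone following along, the scalar function is just a static method on the partial class, decorated for SQL CLR. Here’s a sketch of roughly what mine looks like; the body shown is an approximation, with ToBitmap() doing the real drawing:

        using System.Data.SqlTypes;
        using System.Drawing.Imaging;
        using System.IO;
        using Microsoft.SqlServer.Server;

        public partial class Signature
        {
            [SqlFunction]
            public static SqlBytes CreateBitmap(
                SqlBytes serializedSignatureBytes, SqlInt32 width, SqlInt32 height)
            {
                // Re-hydrate the vector data, render it, and return the image bytes.
                Signature signature = Deserialize(serializedSignatureBytes.Value);
                using (var bitmap = signature.ToBitmap(width.Value, height.Value))
                using (var stream = new MemoryStream())
                {
                    bitmap.Save(stream, ImageFormat.Bmp);
                    return new SqlBytes(stream.ToArray());
                }
            }
        }

    On the T-SQL side, it gets registered with CREATE FUNCTION … EXTERNAL NAME against the uploaded assembly, the standard SQL CLR pattern.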

    Everything worked great… for 3 signatures. As soon as the 4th signature is added, I get this:

    Msg 6532, Level 16, State 49, Line 1
    .NET Framework execution was aborted by escalation policy because of out of memory.
    System.Threading.ThreadAbortException: Thread was being aborted.
    System.Threading.ThreadAbortException:
       at System.Drawing.Graphics.CheckErrorStatus(Int32 status)
       at System.Drawing.Graphics.DrawImage(Image image, Int32 x, Int32 y, Int32 width, Int32 height)
       at CompanyName.SignaturePad.Common.Signature.ToBitmap(Int32 width, Int32 height, Int32 penWidth, Color foregroundColor, Color backgroundColor)
       at CompanyName.SignaturePad.Common.Signature.ToBitmap(Int32 width, Int32 height)
       at CompanyName.SignaturePad.Common.Signature.CreateBitmap(SqlBytes serializedSignatureBytes, SqlInt32 width, SqlInt32 height)

    Looking at that code, I’m doing everything that I know to do. I’m making sure to Dispose() all Bitmaps, Graphics, Streams, etc. I even tried explicitly setting references to null. SQL Server just runs out of memory after about 3 signatures. I’m not sure if this is a side effect of its execution plan (perhaps forking to multiple workers?).

    So… now I’m stuck, and I’m not really sure what to do, until we get a workaround in the reporting engine, which someone else is currently working on. I’ll update this post if we get a working solution.


  • Comcast’s New Network Management Technique: Protocol Agnostic

    I got an email from Comcast titled "Improving Your Online Experience Through Congestion Management". After the experiences I’ve had with Comcast lately, I thought, "Great, now they’re going to screw over my bandwidth too". Generally, I’m pretty satisfied with their Internet service, even though their TV service sucks. I actually like their Powerboost, and I don’t think I’m paying too much money for the service.

    The email had no details in it whatsoever, and the FAQ said (emphasis mine):

    [The technique] will identify which customer accounts are using the greatest amounts of bandwidth and their Internet traffic will be temporarily managed until the period of congestion passes.

    The "managed" bit scared me a little bit. Saying you’re going to "manage" me for downloading too much sounds a bit like you’re going to send a hit man to my house to take me out. I decided to dig further and noticed they had a link to documents the filed with the FCC at the top right of the Comcast network management page. I had to read through 20 pages of boring crap, which is probably actually necessary to explain simple concepts to the FCC, in order to figure out exactly what this technique employs as its counter-congestion tactics.

    I plan to explain, in simple terms, how the new management technique works. Please ask any questions in the comments section.

    The important thing to realize about the new technique versus the old technique is that the new technique is protocol agnostic, meaning it doesn’t discriminate against certain kinds of traffic versus other types. For example, it doesn’t prioritize web surfing or VoIP calling above using BitTorrent to download the latest blockbuster movie *cough* Linux distribution release.

    Users hated Comcast for the previous incarnation of their network management, and the FCC forced them to stop it and come up with a protocol agnostic scheme instead. What they ended up with is, in my opinion, a fair process that doesn’t penalize anyone unless the network is in serious danger of being degraded, and even then doesn’t actually do anything until it actually is degraded. I am, in a word, satisfied.

    All information and pictures that follow are from Comcast’s document and are probably copyrighted by Comcast, even though they were submitted to the FCC. I have a reasonable basis to believe I can reproduce this information for informative purposes.

    [Figure: Comcast network design]

    First, you must understand how your home cable connection reaches the Internet. Your home coax (the screw-on cable that connects to your modem) connects to an optical node in your neighborhood area, which then connects to a CMTS (Cable Modem Termination System) at the headend (local office), which then connects to a series of routers, and finally to Comcast’s Internet backbone connections.

    Each of the CMTS boxes has a number of ports on it, with separate ports for downstream (stuff you download) and upstream (stuff you upload back to the Internet). Comcast says that there are about 275 cable modems, on average, sharing a downstream port, and about 100 sharing an upstream port. Your cable modem has what is called a "bootfile", which is assigned via the DOCSIS protocol on startup. The bootfile contains a lot of information about your cable service, perhaps most importantly how fast you can download/upload.

    The traffic management occurs based on activity at the CMTS ports and is actually applied with a combination of flags set on your cable modem and those flags being processed on Comcast’s routers.

    [Figure: Comcast upstream congestion management decision flowchart]

    The process is really quite simple. Here’s how it works (a code sketch of the same state transitions follows the list):

    1. Each port (keeping in mind it is either upload or download) is monitored independently.
    2. Each cable modem has a flag for its current state
      1. PBE – Priority Best Effort. The default state.
      2. BE – Best Effort. A lower priority state; BE traffic is prioritized below PBE.
    3. If a port reaches "Near Congestion State", which means that it has averaged above a certain utilization threshold over a 15 minute period, network management will commence.
      1. Downstream threshold: 80% utilization
      2. Upstream threshold: 70% utilization
    4. The network searches for users on that CMTS port that are in an "Extended High Consumption State", which means they have averaged over a certain threshold of utilization over a 15 minute period.
      1. The user’s modem is set to BE.
      2. Downstream and upstream threshold: 70% utilization
    5. The network keeps the user in the "Extended High Consumption State" until the user’s average utilization has dropped below the threshold for 15 minutes.
      1. User’s modem is set back to PBE.
      2. Downstream and upstream threshold: 50% utilization
    6. When in the BE state, all PBE traffic will be processed by the Comcast Internet routers before the BE traffic, regardless of the type of traffic, however the likelihood of it reaching a congested status is very low, and even in that case, the probability of dropped traffic is even lower.
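
    If you’d rather read that as code, here is a toy model of the per-modem state transitions (the thresholds come from Comcast’s document; the class shape and method names are entirely my own invention):

        public enum TrafficPriority { PriorityBestEffort, BestEffort }

        public class CableModem
        {
            public TrafficPriority Priority { get; private set; }

            public CableModem()
            {
                // PBE is the default state for every modem.
                Priority = TrafficPriority.PriorityBestEffort;
            }

            // Called once per measurement window with this modem's average
            // utilization (0.0 to 1.0) of its provisioned speed over ~15 minutes.
            public void Reevaluate(bool portNearCongestion, double averageUtilization)
            {
                if (Priority == TrafficPriority.PriorityBestEffort)
                {
                    // Demote only while the port is near congestion and this modem
                    // is in an Extended High Consumption State (over 70% average).
                    if (portNearCongestion && averageUtilization > 0.70)
                        Priority = TrafficPriority.BestEffort;
                }
                else if (averageUtilization < 0.50)
                {
                    // Promote back once usage stays below 50% for a full window.
                    Priority = TrafficPriority.PriorityBestEffort;
                }
            }
        }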

    This means that, while doing my large USENET downloads, if the network is in danger of being congested, I’ll likely be set to BE. However, Comcast has been trialing this new technique in several markets, and they have this to say:

    For example, in Colorado Springs, CO, the largest test market, on any given day in August 2008, an average of 22 users out of 6,016 total subscribers in the trial had their traffic priority status changed to BE at some point during the day.

    NetForecast, Inc. explored the potential risk of a worst-case scenario for users whose traffic is in a BE state:  the possibility of “bandwidth starvation” in the theoretical case where 100 percent of the CMTS bandwidth is taken up by PBE traffic for an extended period of time.  In theory, such a condition could mean that a given user whose traffic is designated BE would be unable to effectuate an upload or download (as noted above, both are managed separately) for some period of time.  However, when these management techniques were tested, first in company testbeds and then in our real-world trials conducted in the five markets, such a theoretical condition did not occur.

    [Figure: Comcast high-level management flows]

    I think it’s a very sane system, and I hope they continue to add bandwidth so that the network doesn’t get congested in the first place. However, I think this system is very fair, and a lot better than the previous one.



  • MCTS and Some Other Stuff

    I am writing this post because of Jen. She pointed out that I haven’t blogged in forever. I thought it was worth posting, though, if for no other reason than to show up Chris’ blogging abilities.

    I passed the Microsoft 70-536 and 70-528 tests. That means I’m MCTS certified on Web, Windows, and SQL Server 2005. Distributed is next.

    Also, LSUS (my college) is having an awards ceremony at 12PM on April 13th, 2006, and they’re awarding me two things. I’ve been selected as the “Outstanding CIS Student for the Academic Year”, and I was also on the Chancellor’s List for both semesters last year (3.8 or above, or something). They’ve offered me the honor, so I’m going to honor their offer.

    Chris (the same Chris as the non-blogging one) and I went to Circle K today for our daily ICEEs (can I pluralize it like that?). While the lady was scanning my card, I saw two little girls (roughly 12-14 years old) talking to Chris, and I thought Chris was hitting on 12 year olds. After she scanned my card, while Chris was waiting on his to be scanned (dial-up credit card processing), I walked over to find out why Chris was doing that. I was like “What’s funny?” since the girls were laughing, and they said it was that Chris has a Batman credit card and was wearing a Superman shirt, and they thought his shoes were Batman shoes. Quickly they lost interest in that and started hitting on me, saying they knew me and stuff, and that she had a bunch of sisters and I used to go over to one of their houses a couple of years ago or something. I have no idea what she was talking about, but Chris and I left. Weird little girls. (Chris and I are 25 and 24, respectively, at the time of this writing.)

    Aside from those happenings, you should watch this video. It’s hilarious. I AM A SCIENTIST! For my nerd friends, Mike posted a video on PXE booting. You will probably enjoy that.

    I am going to take a vacation soon I think, since I have to use 2 weeks this year and I know I’ll use about 1 week between Thanksgiving and Christmas. I’m pretty sure I’m going to go to DFW and hang out with Mike and Courtney. Courtney got a Wii, and Wii would like to play.

    You’re welcome Jen. 🙂


  • Enabling SQL Cache Dependency With Schema Solution

    My solution has now been included in a published book about ASP.NET for SQL Server.

    This post is about programming and ASP.NET / Microsoft SQL Server 2005, so any of my friends who aren’t programmers won’t find this interesting. With that said…

    If you’re reading this post, it’s more than likely that you’ve encountered the “Cannot create trigger ‘dbo.” error when trying to enable a SQL cache dependency on a table. I will explain how to fix this. If you’re lazy, skip to the bottom. If you want to know the source of the problem, continue reading.

    This is the full error text:

    C:\Program Files\Microsoft Visual Studio 8\VC>aspnet_regsql -et -E -d DatabaseName -t SchemaName.TableName -S SERVER_NAME
    
    Enabling the table for SQL cache dependency.
    
    .An error has happened.  Details of the exception:
    Cannot create trigger 'dbo.SchemaName.TableName_AspNet_SqlCacheNotification_Trigger' as its schema is different from the schema of the target table or view.
    
    Failed during cache dependency registration.
    
    Please make sure the database name and the table name are valid. Table names must conform to the format of regular identifiers in SQL.
    
    The failing SQL command is:
    dbo.AspNet_SqlCacheRegisterTableStoredProcedure

    The failing stored procedure is named right there, and I didn’t even see that the first time, so I found what it was running using SQL Profiler. That was stupid of me, but I still found it. To view that stored procedure yourself, you must first enable the database for caching with the “-ed” switch.
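
    For example, using the same placeholders as in the error output above:

    aspnet_regsql -ed -E -d DatabaseName -S SERVER_NAME

    With the database enabled, the offending code in that stored procedure is the following: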

    ALTER PROCEDURE [dbo].[AspNet_SqlCacheRegisterTableStoredProcedure] 
                 @tableName NVARCHAR(450) 
             AS
             BEGIN
             DECLARE @triggerName AS NVARCHAR(3000) 
             DECLARE @fullTriggerName AS NVARCHAR(3000)
             DECLARE @canonTableName NVARCHAR(3000) 
             DECLARE @quotedTableName NVARCHAR(3000) 
             /* Create the trigger name */ 
             SET @triggerName = REPLACE(@tableName, '[', '__o__') 
             SET @triggerName = REPLACE(@triggerName, ']', '__c__') 
             SET @triggerName = @triggerName + '_AspNet_SqlCacheNotification_Trigger' 
             SET @fullTriggerName = 'dbo.[' + @triggerName + ']' 
             /* Create the cannonicalized table name for trigger creation */ 
             /* Do not touch it if the name contains other delimiters */ 
             IF (CHARINDEX('.', @tableName) <> 0 OR 
                 CHARINDEX('[', @tableName) <> 0 OR 
                 CHARINDEX(']', @tableName) <> 0) 
                 SET @canonTableName = @tableName 
             ELSE 
                 SET @canonTableName = '[' + @tableName + ']' 
             /* First make sure the table exists */ 
             IF (SELECT OBJECT_ID(@tableName, 'U')) IS NULL 
             BEGIN 
                 RAISERROR ('00000001', 16, 1) 
                 RETURN 
             END 
             BEGIN TRAN
             /* Insert the value into the notification table */ 
             IF NOT EXISTS (SELECT tableName FROM dbo.AspNet_SqlCacheTablesForChangeNotification WITH (NOLOCK) WHERE tableName = @tableName) 
                 IF NOT EXISTS (SELECT tableName FROM dbo.AspNet_SqlCacheTablesForChangeNotification WITH (TABLOCKX) WHERE tableName = @tableName) 
                     INSERT  dbo.AspNet_SqlCacheTablesForChangeNotification 
                     VALUES (@tableName, GETDATE(), 0)
             /* Create the trigger */ 
             SET @quotedTableName = QUOTENAME(@tableName, '''') 
             IF NOT EXISTS (SELECT name FROM sysobjects WITH (NOLOCK) WHERE name = @triggerName AND type = 'TR') 
                 IF NOT EXISTS (SELECT name FROM sysobjects WITH (TABLOCKX) WHERE name = @triggerName AND type = 'TR') 
                     EXEC('CREATE TRIGGER ' + @fullTriggerName + ' ON ' + @canonTableName +'
                           FOR INSERT, UPDATE, DELETE AS BEGIN
                           SET NOCOUNT ON
                           EXEC dbo.AspNet_SqlCacheUpdateChangeIdStoredProcedure N' + @quotedTableName + '
                           END
                           ')
             COMMIT TRAN
             END
    

    The offending code is:

    SET @fullTriggerName = 'dbo.[' + @triggerName + ']' 
    

    What’s happening is that the stored procedures created by the aspnet_regsql tool do not handle schemas, which are new in SQL Server 2005. The tool automatically assumes you’re using the default schema, “dbo”. I’ve modified Microsoft’s stored procedure to handle schemas gracefully.

    All you need to do is change the DECLARE block and the “Create the trigger name” blocks to:

    DECLARE @triggerName AS NVARCHAR(3000) 
    DECLARE @fullTriggerName AS NVARCHAR(3000)
    DECLARE @canonTableName NVARCHAR(3000) 
    DECLARE @quotedTableName NVARCHAR(3000) 
    DECLARE @schemaName NVARCHAR(3000)
    /* Detect the schema name */
    IF CHARINDEX('.', @tableName) <> 0 AND CHARINDEX('[', @tableName) = 0 OR CHARINDEX('[', @tableName) > 1
        SET @schemaName = SUBSTRING(@tableName, 1, CHARINDEX('.', @tableName) - 1)
    ELSE
        SET @schemaName = 'dbo'
    /* Create the trigger name */
    IF @schemaName <> 'dbo'
        SET @triggerName = SUBSTRING(@tableName,
            CHARINDEX('.', @tableName) + 1, LEN(@tableName) - CHARINDEX('.', @tableName))
    ELSE
        SET @triggerName = @tableName
    SET @triggerName = REPLACE(@triggerName, '[', '__o__') 
    SET @triggerName = REPLACE(@triggerName, ']', '__c__') 
    SET @triggerName = @triggerName + '_AspNet_SqlCacheNotification_Trigger' 
    SET @fullTriggerName = @schemaName + '.[' + @triggerName + ']'
    

    If you’re lazy, or you just want to go ahead and fix it, here are links to the full original Microsoft version and my modified version that works. Just run the code in my modified version after you enable the database, and then you should be able to enable any table, including those in a non-default schema.

    If anyone finds problems with these, please let me know. I tried to test them, but it’s possible there might be scenarios for which I’ve not accommodated.


  • Netgear Routers Die from Overheating

    Netgear routers really suck. They get really hot and then just stop functioning until you turn them off and let them cool down. This usually affects the wireless portion, not the wired portion. I’m only using a WRT614 router as an 802.11g access point, but it’s been totally unstable. I found out that I am not alone: almost everyone who gets one of these has it start overheating at some point.

    So, I went ahead and modded my router by slapping a fan on top, cutting holes, affixing screws and nuts, and soldering the fan power to the internal power input. It isn’t the best looking thing in the world, but it seems to be completely stable now. Check out the pictures over on my Flickr photostream.

    [Photo: Router mod]

    PS – Sorry for the blurry picture quality. I’ll probably redo these at some point. I just snapped these in low light early this AM when I got home from a friend’s apartment (where I did the drilling and soldering).

    PPS – Yes, I’m a huge nerd.

