Concatenating a list of strings in T-SQL

Remember how clumsy it was to concatenate string identifiers of child records within a sub-SELECT, adding that artificial FOR XML PATH('')?
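In case you don't remember, the old pattern looked roughly like this (a sketch from memory, producing the same result as the STRING_AGG query below):

select o.name, i.name,
    STUFF((select ', ' + c.name
           from sys.index_columns ic
           inner join sys.columns c
               on ic.object_id = c.object_id and ic.column_id = c.column_id
           where ic.object_id = i.object_id and ic.index_id = i.index_id
           order by ic.index_column_id
           for XML PATH('')), 1, 2, '')
from sys.indexes i
inner join sys.objects o on i.object_id = o.object_id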

Writing plain SQL again after a long time, I found the “new” function STRING_AGG (introduced in SQL Server 2017) to be a great simplification. I guess mainly because the sub-select can be inlined and the whole SELECT can be GROUP BY’d.

Case in point, I needed to find out which database tables have unique indexes defined, and which columns they indexed.

Here’s the magic:

select o.name, i.name,
    STRING_AGG(c.name, ', ') WITHIN GROUP (ORDER BY ic.index_column_id)
from sys.index_columns ic
inner join sys.indexes i 
    on ic.object_id = i.object_id and ic.index_id = i.index_id
inner join sys.objects o on i.object_id = o.object_id
inner join sys.columns c 
    on ic.object_id = c.object_id and ic.column_id = c.column_id
where o.is_ms_shipped = 0 
and o.name not like '%_history'
and i.is_primary_key = 0
and i.is_unique = 1
group by o.name, i.name
order by 1, 2

Constructing Type-Safe SQL Statements from ORM Classes

Suppose you have a database application, and the database tables are mapped onto C# classes as used by an ORM such as Entity Framework or NHibernate.

Suppose you need to construct an SQL statement manually, because your ORM does not support or implement (or interface to) a given SQL feature.

Of course, you can always write the SQL statement manually, and query data using EF’s Database.SqlQuery() or NHibernate’s CreateSQLQuery().

The problem I found is that as soon as the data model changes, these manually crafted SQL statements are bound to fail.

Let’s have a look at a simple SELECT statement involving a JOIN of two tables (of course ORMs can handle such a query; this is just an illustration):

SELECT s.Name AS SupplierName, a.ZipCode AS AddressZipCode, 
       a.City AS AddressCity, a.Street AS AddressStreet
FROM  Supplier s
INNER JOIN Address a ON s.AddressId = a.Id

The statement includes table names, table aliases, table column names, and column aliases, and I want to construct the column names for the SQL statement from the ORM’s mapped classes.

Luckily I already have the GetPropertyName() function to retrieve a class’s property names.
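If you don’t have such a helper at hand, a minimal sketch could look like this (it unwraps the Convert node that value-type properties produce in the expression tree):

public static string GetPropertyName<T>(Expression<Func<T, object>> expression)
{
    // value-type properties are wrapped in a Convert() node,
    // reference-type properties arrive as a MemberExpression directly
    var member = expression.Body as MemberExpression
        ?? (MemberExpression)((UnaryExpression)expression.Body).Operand;
    return member.Member.Name;
}

With that in place, we can focus on enumerating the columns: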

public static string EnumerateColumns<T>(
    string tablePrefix, 
    string columnPrefix, 
    params Expression<Func<T, object>>[] columns)
{
    return string.Join(", ", 
        columns.Select(c =>
            (tablePrefix != null ? (tablePrefix + ".") : "") +
            GetPropertyName(c) +
            (columnPrefix != null 
                ? (" AS " + columnPrefix + GetPropertyName(c)) 
                : "")));
}

So we have a function handling optional table aliases and optional column name prefixes, which we can invoke for every joined table of our statement:

var sql = "SELECT "
    + EnumerateColumns<Supplier>("s", "Supplier", s => s.Id, s => s.Name)
    + ", "
    + EnumerateColumns<Address>("a", "Address", 
        a => a.ZipCode, a => a.City, a => a.Street)
+ @"
FROM " + nameof(Supplier) + @" s
INNER JOIN " + nameof(Address) + " a ON s." 
    + nameof(Supplier.AddressId) + " = a." + nameof(Address.Id);

Console.WriteLine(sql);

There is a little bit of cheating hidden in this code: it assumes that table names match their class names, and column names match their property names. If the names differ, you can mark up the class declaration with an attribute and query it using GetCustomAttributes(), or use EF’s pluralization service (if it is used).
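For illustration, a hypothetical helper using the [Table] attribute from System.ComponentModel.DataAnnotations.Schema could look like this:

using System.ComponentModel.DataAnnotations.Schema;
using System.Reflection;

public static string GetTableName<T>()
{
    // fall back to the class name if no [Table] attribute is present
    var attr = typeof(T).GetCustomAttribute<TableAttribute>();
    return attr?.Name ?? typeof(T).Name;
}

The nameof(Supplier) and nameof(Address) calls in the statement above would then be replaced by GetTableName<Supplier>() and GetTableName<Address>().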

The full code for this article is available on my GitHub.

Updating TortoiseGit to use GitHub’s Personal Access Token

I recently got GitHub’s email on their changes to authentication tokens, but of course ignored it as long as I could.

We recently updated the format of our API authentication tokens, providing additional security benefits to all our customers.

So today was the day to update my local authentication token in TortoiseGit.

I headed over to GitHub’s Personal Access Token page to generate a new token.

But it was not really clear how to use it as authentication for the repositories already existing on my local PC.

After some google-fu, I opened the TortoiseGit Settings dialog in the root directory of one of my repositories,

TortoiseGit credentials (right-click on repository directory)

changed the Credential helper to Advanced, and entered my GitHub username.

The next TortoiseGit / Push… command opened an authentication dialog, where I pasted the GitHub token into the password field.

After the push completed, I could verify that the new token had been added in Windows under Control Panel / Accounts / Credential Manager / Windows Credentials.

Updated Windows credentials

Renaming files after their time stamp in Ubuntu

When downloading data files from certain web sites, the files usually either are already named after their creation timestamp, or end with a sequential numbering (1), (2), … in their names.

To rename such numbered files, I found that the date -r command displays a file’s modification timestamp, which can be formatted with the +format option:

date -r somefilename.txt +%Y%m%d

To iterate over all downloaded files, I use

for f in file*name*pattern*; do

Putting it all together, I came up with the one-liner

for f in pattern*; do mv "$f" "$(date -r "$f" +filename_%Y%m%d).csv"; done

(Note the quotes around $f: names with a numbering suffix like (1) contain spaces and parentheses, which would break an unquoted mv command.)

Handling __ivy_ngcc_bak compiler errors

An Angular project I work on uses some custom libraries from a private repository. When making changes to the library, it is necessary to test locally, before publishing to the repository.

So how do you test your changes locally? I found it sufficient to copy the result of the ng-packagr script into the library’s directory inside the project’s node_modules directory, run ng build, and you’re done.
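In concrete terms, that copy step could look like this (a sketch with hypothetical paths; adjust to your layout):

xcopy /s /y D:\path\to\weblibrary\dist\weblibrary D:\path\to\web\node_modules\weblibrary\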

This changed when Angular Ivy came along as we made the switch to Angular 10.

Suddenly, calling ng build after copying the packagr files resulted in multiple warnings stating

WARNING in Unable to fully load D:/path/to/web/node_modules/weblibrary/lib/filename.d.ts for source-map flattening: Circular source file mapping dependency: D:/path/to/web/node_modules/weblibrary/lib/filename.d.ts.map -> D:/path/to/web/node_modules/weblibrary/lib/filename.d.ts.map

and an error message

ERROR in Tried to overwrite D:/path/to/web/node_modules/weblibrary/lib/filename.d.ts.__ivy_ngcc_bak with an ngcc back up file, which is disallowed.

Well, I checked, and indeed, the file existed. Let’s delete the *.__ivy_ngcc_bak files, and run ng build again. I also found it necessary to delete the library’s __ivy_ngcc__ directory in the target project.
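The cleanup thus boils down to something like this (again with hypothetical paths):

del /s /q D:\path\to\web\node_modules\weblibrary\*.__ivy_ngcc_bak
rmdir /s /q D:\path\to\web\node_modules\weblibrary\__ivy_ngcc__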

Run ng build again, and only the warnings are output. Run ng build once more, and the warnings are gone.


As I prepared this post, I wondered whether I had missed a solution that already exists for this problem.

I found npx install-from which seems to present itself as an alternative to npm link. I tried it, but unfortunately it stopped with an error message

D:\path\to\web>npx install-from D:\path\to\weblibrary\dist\weblibrary
npx: installed 5 in 3.016s
(node:14136) UnhandledPromiseRejectionWarning: Error: spawn npm ENOENT
    at Process.ChildProcess._handle.onexit (internal/child_process.js:267:19)
    at onErrorNT (internal/child_process.js:469:16)
    at processTicksAndRejections (internal/process/task_queues.js:84:21)
(node:14136) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:14136) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

without any indication what might have gone wrong.

Note that if you run a module which is not installed locally, npx will download it every time from your configured repository. If you use a package more often, better install it globally by running npm install -g package.


So I checked again, and found that npm install also supports installation from a folder, not only from a repository.

Running npm install D:/path/to/web/node_modules/weblibrary/package replaced the library’s <DIR> entry under node_modules with a <JUNCTION> entry (i.e. symlink) pointing to the path given as parameter:

Install the package in the directory as a symlink in the current project. Its dependencies will be installed before it’s linked.

Installing the package this way adds a node_modules directory inside the package directory, and running ng build also updates the package.json file there. The application’s package.json entry for the package changes from a version-specific reference to the library in the repository to a “file:…” reference to the package directory.


Now it became clear what install-from is trying to do:

  • run npm pack in the library directory to create a .tgz
  • run npm install from the .tgz in the application directory
  • it fails somewhere

As a work-around to recreate the functionality of install-from, I call npm pack from the library’s directory, which creates a library-version.tgz file, and then npm install from the .tgz in the application directory.
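Spelled out (with hypothetical paths and version number):

cd /d D:\path\to\weblibrary\dist\weblibrary
npm pack
cd /d D:\path\to\web
npm install D:\path\to\weblibrary\dist\weblibrary\weblibrary-1.0.0.tgz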


So I came up with 3 methods to update a library in an application:

  • Clean-up Ivy artifacts and inject ng-packagr result using xcopy
  • npm install from library’s ng-packagr directory
  • npm pack to .tgz and npm install from .tgz

Use awk to grep

I have gawk installed on my Windows 10 machine, but no grep, and still needed to quickly find text in some files.

So I came up with (read: “found on the internet”) this one-liner

@for /f "eol=: delims=" %F in ('dir /b /s [directory]') do
@awk "/[search string]/ {print $0}" %F

Usually I use grepWin for such tasks, but only later realized that a simple Ctrl-C copies the selected search result to the clipboard – grepWin’s context menu does not have a menu item for this function, even though it has been suggested in a related issue.

I also noticed grepWin has a memory problem if you search huge files on a machine with little RAM. On the other hand, it searches using the Windows codepage rather than the DOS codepage.

Fixing the --startvm Error Message

After upgrading Ubuntu from 18.04 to 20.04, I noticed that my VM .desktop shortcut throws the error message

--startvm is an option for the VirtualBox VM runner (VirtualBoxVM) application, not the VirtualBox Manager.

Before the upgrade, it simply started the virtual machine referenced as parameter value.

It seems that VirtualBox moved the --startvm parameter from the previous VirtualBox executable to VirtualBoxVM. More info and links can be found in this VirtualBox ticket.

The (easy) solution was to open the .desktop file in an editor, and change the line

Exec=/usr/lib/virtualbox/VirtualBox ....

to

Exec=/usr/lib/virtualbox/VirtualBoxVM ....

Finding unused SpecFlow step implementations with PowerShell

SpecFlow steps are public methods adorned using the [Given], [When], and [Then] attributes inside classes marked with the [Binding] attribute.

To load the assembly containing the compiled SpecFlow steps, I found it necessary to load all other assemblies inside the bin directory. Otherwise an error would occur in the assembly’s GetTypes() method stating the references to other assemblies could not be resolved.
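A pre-loading loop could look like this (a sketch; $basedir is the bin directory, as used below):

# pre-load every assembly in the bin directory so that GetTypes()
# can resolve the references of the SpecFlow assembly
foreach ($dll in Get-ChildItem "$basedir\*.dll") {
    try { [System.Reflection.Assembly]::LoadFrom($dll.FullName) | Out-Null }
    catch { Write-Warning "could not load $($dll.Name)" }
}

With the dependencies in place, the SpecFlow assembly itself can be loaded: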

$asm = [System.Reflection.Assembly]::LoadFrom("$basedir\$specdll", $null, 0)
$types = $asm.GetTypes()

Next, the SpecFlow attribute types are retrieved

$binding = [TechTalk.SpecFlow.BindingAttribute]
$given = [TechTalk.SpecFlow.GivenAttribute]
$when = [TechTalk.SpecFlow.WhenAttribute]
$then = [TechTalk.SpecFlow.ThenAttribute]

along with all [Binding] classes and their methods carrying the attributes listed above.

The attributes are collected in an array, as their Regex property contains the regular expression that will be used to search the .feature files using PowerShell’s select-string cmdlet.
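A sketch of this collection step, assuming the variables defined above ($foundattributes and $notfoundattributes are initialized for the search loop below):

# collect every Given/When/Then attribute of all [Binding] classes
$attributes = @()
foreach ($t in $types | Where-Object { $_.GetCustomAttributes($binding, $true).Length -gt 0 }) {
    foreach ($m in $t.GetMethods()) {
        $attributes += $m.GetCustomAttributes($given, $true)
        $attributes += $m.GetCustomAttributes($when, $true)
        $attributes += $m.GetCustomAttributes($then, $true)
    }
}
$foundattributes = @()
$notfoundattributes = @()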

:Outer foreach ($a in $attributes) {
    foreach ($ff in $featurefiles) {
        $found = $featurefilecontents[$ff] |
            select-string -Pattern $a.Regex -CaseSensitive
        if ($found.Matches.Length -gt 0) {
            # write-host "found $($a.Regex) in file $ff"
            $foundattributes += $a
            continue Outer
        }
    }
    write-host "did not find a reference for $($a.GetType().Name) $($a.Regex)"
    $notfoundattributes += $a
}

The script has been developed to analyze a project using SpecFlow 2.4.0, and is available on my PowerShell Snippets repository.

Invoking app/web.config File Transformations on Build

MSBuild provides the functionality to generate a production web.config (or app.config) from the developer’s web.config merged with a web.*.config transformation file named after the current Configuration. This web.config File Transformation is applied when you Deploy a (web) application.

But what if you want to apply the same file transformation during build? The scenario occurs if a team checks in the web.config file, but the file needs customization for each developer on the team (e.g. different local SQL connection strings, log directories, etc.).

Digging through the various .targets files that make up the VS build system, I found a reference to the <TransformXml> task in Microsoft.Web.Publishing.targets, deep inside the Visual Studio installation directory (see the answers to this SO question).
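Depending on the project type, the task may need to be made available explicitly with a <UsingTask> declaration (a sketch; the exact path varies with the Visual Studio version and installation):

<UsingTask TaskName="TransformXml"
           AssemblyFile="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\Web\Microsoft.Web.Publishing.Tasks.dll" />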

Putting it all together, I came up with an AfterBuild step in the .csproj file which applies the web.config transformation on build, rather than on deploy:

  <Target Name="AfterBuild">
	<TransformXml Source="web.common.config"
	        Transform="web.onbuild.config"
	        Destination="web.config" />
  </Target>

(Add the $(Configuration) property where necessary, e.g. Transform="web.$(Configuration).config".)

Fixing the Multi-Column Sort behavior of a Kendo UI Grid

Kendo UI for Angular contains a Grid component which also supports sorting multiple columns.

The sample grid only contains 3 columns, and sorting by multiple columns does not have much effect, but this is only about the sorting behavior, not the data being sorted.

As you check the “Enable multiple columns sorting” box, you’ll notice a behavior that I find counter-intuitive:

  • click on the Product Name column, and the grid is sorted by Product Name, showing 1 sort arrow
  • click on the ID column, and the grid is sorted by Product Name and then ID, indicated by the column order next to the sort arrows (Product Name – 1, ID – 2)
  • click on the Product Name column again, to reverse the sort order of the Product Name column.

What do you expect?

What the Kendo UI Grid does is change the sort sequence so that the column clicked last always ends up last in the order of sorted columns.

My understanding is that the user wanted to change the direction of the sorted column (ascending vs. descending), and not change the order of the sorted columns.

Fortunately, the <kendo-grid> component provides a (sortChange) event, where you can implement your favorite sort behavior.
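An excerpt of what such a handler could look like (a sketch; SortDescriptor comes from @progress/kendo-data-query, and loadProducts() stands in for whatever method re-queries your data):

import { SortDescriptor } from '@progress/kendo-data-query';

public sort: SortDescriptor[] = [];

public sortChange(newSort: SortDescriptor[]): void {
  // keep the previous column order: a column that is clicked again only
  // changes its direction instead of moving to the end of the sort list
  const previous = this.sort.map((s) => s.field);
  const position = (field: string) => {
    const i = previous.indexOf(field);
    return i < 0 ? previous.length : i;
  };
  this.sort = [...newSort].sort((a, b) => position(a.field) - position(b.field));
  this.loadProducts(); // hypothetical helper re-querying the data
}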

I created a multi-column sort sample on StackBlitz with my preferred sort behavior; the code can be viewed and forked here.