Renaming files after their time stamp in Ubuntu

When downloading data files from certain web sites, the files are usually either already named after their creation timestamp, or they end with sequential numbering (1), (2), … in their names.

To rename such numbered files, I found that the date -r command displays a file’s modification timestamp, which can be formatted with the +format option:

date -r somefilename.txt +%Y%m%d

To iterate over all downloaded files, I use

for f in file*name*pattern*; do

Putting it all together, I came up with the one-liner

for f in pattern*; do mv "$f" "$(date -r "$f" +filename_%Y%m%d).csv"; done
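
Note that several files downloaded on the same day map to the same target name, so later mv calls would overwrite earlier results. A variant that includes the time in the format string and refuses to overwrite existing files (mv -n) may be safer:

# -n: do not overwrite an existing target file
for f in pattern*; do mv -n "$f" "$(date -r "$f" +filename_%Y%m%d_%H%M%S).csv"; done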

Handling __ivy_ngcc_bak compiler errors

An Angular project I work on uses some custom libraries from a private repository. When making changes to the library, it is necessary to test locally, before publishing to the repository.

So how do you test your changes locally? I found it sufficient to copy the output of the ng-packagr script into the library’s directory under the project’s node_modules, run ng build, and you’re done.
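
A sketch of that copy step, reusing the placeholder paths that appear in the messages below; adjust them to your setup:

rem copy the ng-packagr output over the installed package (placeholder paths)
xcopy /s /y /i D:\path\to\weblibrary\dist\weblibrary D:\path\to\web\node_modules\weblibrary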

This changed when Angular Ivy came along as we made the switch to Angular 10.

Suddenly, calling ng build after copying the packagr files resulted in multiple warnings stating

WARNING in Unable to fully load D:/path/to/web/node_modules/weblibrary/lib/filename.d.ts for source-map flattening: Circular source file mapping dependency: D:/path/to/web/node_modules/weblibrary/lib/filename.d.ts.map -> D:/path/to/web/node_modules/weblibrary/lib/filename.d.ts.map

and an error message

ERROR in Tried to overwrite D:/path/to/web/node_modules/weblibrary/lib/filename.d.ts.__ivy_ngcc_bak with an ngcc back up file, which is disallowed.

Well, I checked, and indeed, the file existed. So let’s delete the *.__ivy_ngcc_bak files; I also found it necessary to delete the library’s __ivy_ngcc__ directory in the target project.
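
A sketch of the clean-up, again with the placeholder paths from above:

rem remove the ngcc backup files and the ngcc output directory of the library
del /s /q D:\path\to\web\node_modules\weblibrary\*.__ivy_ngcc_bak
rmdir /s /q D:\path\to\web\node_modules\weblibrary\__ivy_ngcc__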

Run ng build again, and only the warnings are output. Run ng build once more, and the warnings are gone.


As I prepared this post, I wondered whether I had missed a solution that already exists for this problem.

I found npx install-from, which presents itself as an alternative to npm link. I tried it, but unfortunately it stopped with an error message

D:\path\to\web>npx install-from D:\path\to\weblibrary\dist\weblibrary
npx: installed 5 in 3.016s
(node:14136) UnhandledPromiseRejectionWarning: Error: spawn npm ENOENT
at Process.ChildProcess._handle.onexit (internal/child_process.js:267:19)
at onErrorNT (internal/child_process.js:469:16)
at processTicksAndRejections (internal/process/task_queues.js:84:21)
(node:14136) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:14136) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

without any indication what might have gone wrong.

Note that if you run a module which is not installed locally, npx will download it every time from your configured repository. If you use a package more often, it is better to install it globally by running npm install -g package.


So I checked again, and found that npm install also supports installation from a folder, not only from a repository.

Running npm install D:/path/to/web/node_modules/weblibrary/package replaced the library’s <DIR> entry under node_modules with a <JUNCTION> entry (i.e. symlink) pointing to the path given as parameter:

Install the package in the directory as a symlink in the current project. Its dependencies will be installed before it’s linked.

Installing the package this way adds a node_modules directory inside the package directory, and running ng build also updates the package.json file there. The application’s package.json entry for the package is changed from a version-specific reference to the library in the repository to a “file:…” reference to the package directory.
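
In the application’s package.json, the dependency entry is changed roughly like this (the original version number is made up, and the exact relative path depends on the folder you install from):

"weblibrary": "^1.2.3"

to

"weblibrary": "file:node_modules/weblibrary/package"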


Now it became clear what install-from is trying to do:

  • run npm pack in the library directory to create a .tgz
  • run npm install from the .tgz in the application directory
  • it fails somewhere

As a work-around to recreate the functionality of install-from, I call npm pack package-directory from the library’s directory, which creates a library-version.tgz file, and then run npm install from the .tgz in the application directory.
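
A sketch of the two steps; the version number in the .tgz filename is made up, npm pack prints the actual filename it creates:

D:\path\to\weblibrary> npm pack dist\weblibrary
weblibrary-1.2.3.tgz

D:\path\to\web> npm install D:\path\to\weblibrary\weblibrary-1.2.3.tgz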


So I came up with 3 methods to update a library in an application:

  • Clean-up Ivy artifacts and inject ng-packagr result using xcopy
  • npm install from library’s ng-packagr directory
  • npm pack to .tgz and npm install from .tgz

Use awk to grep

I have gawk installed on my Windows 10 machine, but no grep, yet I still needed to quickly find text in some files.

So I came up with (read: “found on the internet”) this one-liner

@for /f "eol=: delims=" %F in ('dir /b /s [directory]') do @awk "/[search string]/ {print $0}" %F
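
For example, to search all .log files below C:\logs for lines containing ERROR (directory and search string are, of course, made up):

@for /f "eol=: delims=" %F in ('dir /b /s C:\logs\*.log') do @awk "/ERROR/ {print $0}" %F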

Usually I use grepWin for such tasks, but only later realized that a simple Ctrl-C copies the selected search result to the clipboard – grepWin’s context menu does not have a menu item for this function, even though it has been suggested in a related issue.

I also noticed grepWin has a memory problem if you search huge files on a machine with little RAM. On the other hand, it searches using the Windows codepage rather than the DOS codepage.

Fixing the --startvm Error Message

After upgrading Ubuntu from 18.04 to 20.04, I noticed that my VM .desktop shortcut throws the error message

--startvm is an option for the VirtualBox VM runner (VirtualBoxVM) application, not the VirtualBox Manager.

Before the upgrade, it simply started the virtual machine referenced by the parameter value.

It seems that VirtualBox moved the --startvm parameter from the previous VirtualBox executable to VirtualBoxVM. More info and links can be found in this VirtualBox ticket.

The (easy) solution was to open the .desktop file in an editor, and change the line

Exec=/usr/lib/virtualbox/VirtualBox ....

to

Exec=/usr/lib/virtualbox/VirtualBoxVM ....
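
For reference, a minimal .desktop file after the change might look like this (the VM name is a placeholder; --startvm accepts the VM name or its UUID):

# minimal example; "My VM" is a placeholder for the actual VM name or UUID
[Desktop Entry]
Type=Application
Name=My VM
Exec=/usr/lib/virtualbox/VirtualBoxVM --startvm "My VM"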

Finding unused SpecFlow step implementations with PowerShell

SpecFlow steps are public methods adorned with the [Given], [When], and [Then] attributes inside classes marked with the [Binding] attribute.
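
For reference, such a binding class looks something like this (class name, method and regex are made up):

[Binding]
public class LoginSteps
{
    // the attribute argument is the regular expression matched against the .feature file text
    [Given(@"the user is logged in as ""(.*)""")]
    public void GivenTheUserIsLoggedInAs(string username)
    {
        // ...
    }
}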

To load the assembly containing the compiled SpecFlow steps, I found it necessary to load all other assemblies inside the bin directory first. Otherwise an error would occur in the assembly’s GetTypes() method stating that the references to other assemblies could not be resolved.

$asm = [System.Reflection.Assembly]::LoadFrom("$basedir\$specdll", $null, 0)
$types = $asm.GetTypes()
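
The preceding load of the other assemblies is not shown here; a minimal sketch, assuming $basedir points to the bin directory, could look like this:

# pre-load all assemblies in the bin directory so that GetTypes() can resolve references
Get-ChildItem -Path $basedir -Filter *.dll | ForEach-Object {
    try { [System.Reflection.Assembly]::LoadFrom($_.FullName) | Out-Null } catch { }
}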

Next, the SpecFlow attribute types are retrieved

$binding = [TechTalk.SpecFlow.BindingAttribute]
$given = [TechTalk.SpecFlow.GivenAttribute]
$when = [TechTalk.SpecFlow.WhenAttribute]
$then = [TechTalk.SpecFlow.ThenAttribute]

along with all [Binding] classes and their methods with the attributes listed above.
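
A sketch of how these classes and methods might be collected (variable names follow the snippets above):

# collect all Given/When/Then attributes of methods in [Binding] classes
$attributes = @()
foreach ($t in $types) {
    if ($t.GetCustomAttributes($binding, $true).Length -gt 0) {
        foreach ($m in $t.GetMethods()) {
            $attributes += $m.GetCustomAttributes($given, $true)
            $attributes += $m.GetCustomAttributes($when, $true)
            $attributes += $m.GetCustomAttributes($then, $true)
        }
    }
}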

The attributes are collected in an array, as their Regex property contains the regular expression that will be used to search the .feature files using PowerShell’s select-string cmdlet.

:Outer foreach($a in $attributes) {
    foreach($ff in $featurefiles) {
        $found = $featurefilecontents[$ff] |
            select-string -Pattern $a.Regex -CaseSensitive
        if ($found.Matches.Length -gt 0) {
            #write-host "found $($a.Regex) in file $ff"
            $foundattributes += $a
            continue Outer
        }
    }
    write-host "did not find reference for $($a.GetType().Name) $($a.Regex)"
    $notfoundattributes += $a
}

The script has been developed to analyze a project using SpecFlow 2.4.0, and is available on my PowerShell Snippets repository.

Invoking app/web.config File Transformations on Build

MSBuild provides the functionality to generate a production web.config (or app.config) from the developer’s web.config merged with a web.*.config transformation file named after the current Configuration. This web.config File Transformation is applied when you Deploy a (web) application.

But what if you want to apply the same file transformation during build? The scenario occurs if a team checks in the web.config file, but needs customization for each developer of the team (e.g. different local SQL connection strings, log directories etc).

Digging through the various .targets files that configure the VS build system, I found a reference to the <TransformXml> task in Microsoft.Web.Publishing.targets deep inside the Visual Studio installation directory (see the answers on this SO question).

Putting it all together, I came up with an AfterBuild step in the .csproj file which applies the web.config transformation on build, rather than on deploy:

  <Target Name="AfterBuild">
	<TransformXml Source="web.common.config"
	        Transform="web.onbuild.config"
	        Destination="web.config" />
  </Target>

(add the parameter $(Configuration) where necessary.)
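
If the TransformXml task is not already known to your project, a UsingTask element pointing to Microsoft.Web.Publishing.Tasks.dll makes it available; the assembly path below is the commonly cited location and may differ for your Visual Studio installation. The same target, now with the configuration-specific transform file:

  <!-- the assembly path is the commonly cited location; adjust to your VS version -->
  <UsingTask TaskName="TransformXml"
             AssemblyFile="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\Web\Microsoft.Web.Publishing.Tasks.dll" />
  <Target Name="AfterBuild">
    <TransformXml Source="web.common.config"
                  Transform="web.$(Configuration).config"
                  Destination="web.config" />
  </Target>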

Fixing the Multi-Column Sort behavior of a Kendo UI Grid

Kendo UI for Angular contains a Grid component which also supports sorting multiple columns.

The sample grid contains only 3 columns, and sorting by multiple columns does not have much visible effect, but this post is only about the sorting behavior, not the data being sorted.

As you check the “Enable multiple columns sorting” box, you’ll notice a behavior that I find counter-intuitive:

  • click on the Product Name column, and the grid is sorted by Product Name, showing 1 sort arrow
  • click on the ID column, and the grid is sorted by Product Name and then ID, indicated by the column order next to the sort arrows (Product Name – 1, ID – 2)
  • click on the Product Name column again, intending to reverse the sort direction of the Product Name column.

What do you expect?

What the Kendo UI Grid does instead is change the sort sequence so that the column clicked last ends up last in the order of sorted columns.

My understanding is that the user wanted to change the direction of the sorted column (ascending vs. descending), and not change the order of the sorted columns.

Fortunately, the <kendo-grid> component provides a (sortChange) event, where you can implement your favorite sort behavior.
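
A sketch of one possible handler, keeping the previous column order and only adopting the toggled directions (orderBy and SortDescriptor come from @progress/kendo-data-query; the data handling is simplified):

import { Component } from '@angular/core';
import { orderBy, SortDescriptor } from '@progress/kendo-data-query';

@Component({
  selector: 'my-app',
  template: `<kendo-grid [data]="gridData" [sortable]="{ mode: 'multiple' }"
                         [sort]="sort" (sortChange)="sortChange($event)">
               <!-- kendo-grid-column definitions omitted -->
             </kendo-grid>`
})
export class AppComponent {
  private products = [ /* the data to display */ ];

  public sort: SortDescriptor[] = [];
  public gridData = this.products;

  public sortChange(newSort: SortDescriptor[]): void {
    // keep the previous order of the sorted columns and only adopt the new directions;
    // columns whose direction became undefined drop out of the sort
    const merged = this.sort
      .map(old => newSort.find(s => s.field === old.field) || old)
      .filter(s => s.dir !== undefined);
    // columns that are new to the sort are appended at the end
    newSort
      .filter(s => !this.sort.some(old => old.field === s.field))
      .forEach(s => merged.push(s));
    this.sort = merged;
    this.gridData = orderBy(this.products, this.sort);
  }
}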

I created a multi-column sort sample on StackBlitz with my preferred sort behavior; the code can be viewed and forked here.

Retrieving SP and CU of installed SQL Server instances

Microsoft provides a list of Latest updates for Microsoft SQL Server which gives you the latest Cumulative Update (CU) number for each Service Pack (SP) of each version of SQL Server since 2000.

But how do you check which version of SQL Server you have installed on your machines?

Sure, the internets are full of SQL scripts which retrieve this information (MS, SO, TN, etc), but those scripts require you to connect to every SQL Server instance and query it individually.

If you have direct access to the machine running SQL Server (such as your development machine), wouldn’t it be nice if you saw the list of installed SQL Server versions, whether the instances are running or stopped?

So I typed along in the PowerShell ISE, retrieving all executables named sqlservr.exe (the executable hosting SQL Server), retrieving their VersionInfo, and outputting the relevant information:

$basedir = "C:\Program Files\Microsoft SQL Server"
$exes = Get-ChildItem -Path $basedir -Filter sqlservr.exe -Recurse -ErrorAction SilentlyContinue -Force

$exes | foreach-object -Process {
    $ip = Get-ItemProperty -Path $_.FullName
    $vi = $ip.VersionInfo
    [pscustomobject]@{
        ProductVersion = $vi.ProductVersion;
        FileVersion = $vi.FileVersion;
        DirectoryName = $_.DirectoryName;
        LastWriteTime = $_.LastWriteTime;
    }
} | format-table

So, as a first step, I have the ProductVersion numbers.

As I found while researching this script, Microsoft also provides a list of product versions (“build versions”) indicating SQL Server version, Service Pack and Cumulative Update. The list can be downloaded as .xlsx from aka.ms/SQLServerbuilds.

So I have the installed product versions as a list of PS objects, and the build numbers in .xlsx format; let’s combine both.

Fortunately, there is a PS library to read .xlsx files called ImportExcel (GitHub), and you install it by simply running

Install-Module ImportExcel -Scope CurrentUser

(you probably need to update PowerShell’s NuGet provider, which is done semi-automatically in the process)

As it turned out, ImportExcel is PS code wrapping OfficeOpenXml.ExcelPackage, which I have dealt with in previous C# projects, so you do not have to have Excel installed when parsing the downloaded .xlsx.

The script uses Get-ExcelSheetInfo to query all the worksheets of the Excel file, and for each worksheet runs Import-Excel to retrieve the worksheet’s data into a hashtable indexed by the build number.
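
A sketch of this step; the file name and the name of the build number column are assumptions, the actual .xlsx may use different labels:

$xlsx = "SQLServerBuilds.xlsx"                     # downloaded from aka.ms/SQLServerbuilds
$builds = @{}
foreach ($sheet in Get-ExcelSheetInfo -Path $xlsx) {
    foreach ($row in (Import-Excel -Path $xlsx -WorksheetName $sheet.Name)) {
        if ($row.'Build Number') {                 # column name is an assumption
            $builds[$row.'Build Number'] = $row
        }
    }
}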

Finally, the original procedure is extended to look up the ProductVersion number in the hashtable, and merge the Excel data with the original result.
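
A sketch of what the merged output could look like; the Excel column names accessed here are assumptions, as above:

$exes | foreach-object -Process {
    $vi = (Get-ItemProperty -Path $_.FullName).VersionInfo
    $build = $builds[$vi.ProductVersion]
    [pscustomobject]@{
        ProductVersion   = $vi.ProductVersion;
        SQLServer        = $build.'SQL Server Version';   # column names are assumptions
        ServicePack      = $build.'Service Pack';
        CumulativeUpdate = $build.'Cumulative Update';
        DirectoryName    = $_.DirectoryName;
    }
} | format-table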

Both versions of the script are available on my GitHub PowerShell repository.

Remove Unused References with Visual Studio 2017

I liked the Reference Assistant extension very much, but unfortunately it only works for Visual Studio versions 2010 through 2013, and I always wanted to adapt the extension to later versions of Visual Studio.

Now I did.

So I forked the original code from the repository on GitHub (originally hosted on CodePlex).

Since I had no experience working with the VSSDK, I created a dummy project and clicked my way through it, trying to figure out where the original code hooked into VS, and how to migrate that code from 2010 to 2017.

One of the main obstacles was the setup of the Options page, which seems to have changed fundamentally. Fortunately, the Extensibility Samples also contain a project named Options covering the Tools->Options dialog.

Of course, migration also means upgrading .Net versions (implied by creating a VS2017 extension project), replacing assembly references with NuGet packages, and a little bit of tuning and tweaking.

You can find the upgraded Reference Assistant on GitHub, the first release version is here.

What I like about this tool is that it lets you preview the changes it is going to make, and exclude references you know are required from removal, as the tool does not consider occasional implied references and assemblies referenced in .config files.

Importing mysqldump into MS SQL Server

To import data from MySQL into a MS SQL database, we need to first analyze the output of mysqldump, the backup utility of MySQL. I have covered the most obvious differences to T-SQL in my previous post.

Fortunately, the output of mysqldump is line-oriented, i.e. every CREATE TABLE statement, table column or index, and INSERT statement is in a separate line.

In a brute-force approach, we can therefore parse each line using regular expressions, and do not require full parsing involving tokenizer and grammar.

I found a couple of interesting points that need to be considered when generating T-SQL statements:

  • As T-SQL did not support “DROP IF EXISTS” before SQL Server 2016, we need to provide the classical variant “IF OBJECT_ID() IS NOT NULL” (see the example after this list)
  • NVARCHAR(MAX) columns cannot be indexed
  • BINARY data needs to be 0x-encoded
  • UNIQUE INDEXes should not include NULLable columns
  • Multi-row INSERT INTO statements cannot contain more than 1000 rows
  • ‘0000-00-00’ is a valid date in MySQL, but not in MSSQL. The value is being replaced by ‘1800-01-01’ to avoid the error message

    The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
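
For the first point, the classical variant looks like this (table name is a placeholder):

-- pre-SQL Server 2016 equivalent of DROP TABLE IF EXISTS dbo.mytable
IF OBJECT_ID(N'dbo.mytable', N'U') IS NOT NULL
    DROP TABLE dbo.mytable;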

Some indexes may require manually editing column collations, e.g. for unique case-sensitive MySQL indexes.

When executing T-SQL statements in SSMS, a number of warnings or errors can occur:

  • A warning that indexed data is supported to a maximum key length of 900 bytes
  • Inserting data may result in the error “String or binary data would be truncated.”
  • Some files are too big to execute in SSMS, and need to be executed using sqlcmd
  • A batch with too many too large INSERT INTO statements may raise the error

    There is insufficient system memory in resource pool ‘default’ to run this query

    so we break up the batch using “GO”

  • Multiple cascade paths

MS SQL Server does not support ON DELETE CASCADE clauses that result in multiple cascade paths, e.g. when the grandchild record of a parent-child-grandchild relation can be reached via more than one path. Creating such a FOREIGN KEY raises the error message

Msg 1785, Level 16, State 0, Line 1
Introducing FOREIGN KEY constraint ‘fk_xxx’ on table ‘xxx’ may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints.

In such cases, either drop the ON DELETE CASCADE if possible, or replace the deletion cascade with an INSTEAD OF DELETE trigger.
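
A sketch of such a trigger, with made-up table and column names, deleting the dependent rows explicitly instead of relying on a second cascade path:

-- delete the grandchild rows first, then the child rows themselves
CREATE TRIGGER dbo.tr_child_instead_of_delete ON dbo.child
INSTEAD OF DELETE
AS
BEGIN
    DELETE gc
    FROM dbo.grandchild gc
    INNER JOIN deleted d ON gc.child_id = d.id;

    DELETE c
    FROM dbo.child c
    INNER JOIN deleted d ON c.id = d.id;
END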

I developed mysqldump2mssql, a prototype implementing these rules, which is available on my GitHub. The program has been developed and tested against MySQL dumps of MediaWiki, Joomla, Bugzilla and Nextcloud.

The tool generates up to 4 files for each table defined in the source SQL file:

  • create.sql: the CREATE TABLE statement, optional CREATE UNIQUE INDEX statements
  • data.sql: the INSERT INTO statements
  • fk.sql: the FOREIGN KEY relations to other tables
  • fulltext.sql: the CREATE FULLTEXT statement

The filenames of the generated files consist of the sequence number of execution, the table name, and one of the endings mentioned in the list above.