Windows – rakhesh.com

ADFS with Exchange OWA & ECP (contd.)


This is a continuation of my post from yesterday.

While OWA works fine following my post yesterday, I learnt today that ECP does not work for users in the second domain. (To use the correct terminology: users in the resource forest are able to log in via ADFS and access both OWA & ECP, but users in the organization forest are only able to access OWA. With ECP they get the error below.)

[screenshot: the ECP error]

Yeah, sucks!

With some trial and error here’s what I learnt:

  • ECP expects the SID passed along to be that of the disabled account in the resource forest (i.e. where Exchange & ADFS are hosted). Since I am passing along the SID from the organization forest, it denies access. 
  • OWA, on the other hand, expects the SID passed along to be that of the active account in the organization forest (which is what we are currently passing). If one tries to make ECP happy and somehow passes the SID of the disabled account in the resource forest, OWA complains as below. 

[screenshot: the OWA error]

I managed to work around this a bit. It is not a complete fix and I wouldn’t implement this in a production environment, but since my intention here is more to pick up ADFS than anything else I am happy I came up with this workaround. 

First off, I changed my second ADFS server (the one in the organization forest) to not pass along all claims to the relying party trust with the resource forest. This is not strictly needed, but I had to do it for another change I wanted to implement, and figured it's best to keep control of the claims I pass along. As I alluded to in another post from today, you can end up with duplicate claims if you pass along all claims and make changes along the way. Better to be picky. Anyways, on the second ADFS server I decided to pass along just 4 claims:

[screenshot: the four claim rules]

At the risk of digressing, the first claim is why I initially decided to stop passing along all claims. I have admin accounts for myself with the same username in both forests, but differing UPNs, and I wanted it such that when I visit OWA or ECP with the admin account of the organization forest I am signed in to OWA or ECP as the admin account of the resource forest. That is, when I sign in as admin@two.raxnet.global (organization forest; this account has no mailbox or Exchange rights) ADFS will tell Exchange that this is actually admin@raxnet.global and let me log in as the latter. That's the point of ADFS and claims and all that, after all – Exchange doesn't do any authentication, it simply listens to what the ADFS server tells it, and I can do all this fun stuff on the ADFS side. 

Thus I created a claim rule like this:

c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"]
=> issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = RegExReplace(c.Value, "(?i)@two.raxnet.global", "@raxnet.global"), ValueType = c.ValueType);

It takes an incoming UPN claim and does a regex replace to substitute two.raxnet.global with raxnet.global. Easy peasy. Exchange is none the wiser and thus lets me log in as admin@two.raxnet.global and access the mailbox & ECP of admin@raxnet.global. 

Going back to the original issue. I thus have the 4 claims as above. 

The relying party trust from the ADFS server in the resource forest to OWA works fine, as it is happy with the SID it gets from the organization forest. What I need is a way to translate the SID of the organization forest to a SID in the resource forest, and pass that to the ECP trust. I don't need to translate SID -> SID as such; what I really need to do is look up the account name I am getting from the organization forest and use that to find the SID in the resource forest. When I create the linked mailbox in the resource forest I use the same username as in the organization forest, so what I have to do is extract this username from the incoming claims and do a lookup using that. So far so good?

To do this I removed the claim rule on the ECP relying party trust that was passing through the SID. (I let the UPN one stay as-is; that's fine.)

Then I added a custom rule like this:

Sanitize Windows account name

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
=> add(Type = "http://raxnet.global/windowsaccountname", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = RegExReplace(c.Value, "(?i)RAXNET2", "RAXNET"), ValueType = c.ValueType);

This takes an incoming windowsaccountname claim and makes a new claim (note, I add, not issue – so this claim is never output) called “http://raxnet.global/windowsaccountname” (just a dummy URI I made up; it doesn't matter), and the value of this claim is the incoming windowsaccountname but with the domain name replaced (i.e. remove the domain name of the organization forest, replace it with the domain name of the resource forest). 

Now I added another custom rule:

Lookup SID

c:[Type == "http://raxnet.global/windowsaccountname"]
=> issue(store = "Active Directory", types = ("http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid"), query = ";objectSID;{0}", param = c.Value);

This one takes the claim I created above (the windowsaccountname with the domain name changed) and queries AD to find the objectSID attribute of this user account. This gives me the SID of this account in the resource forest, which I then return as a primarysid claim. 

To recap here is what I have:

[screenshot: the resulting claim rules]

Do this and now ECP works via ADFS! :) Unfortunately if you now try to go to OWA it fails with the error I was getting earlier. :(

[screenshot: the OWA error again]

The problem here is that both OWA and ECP are under the same domain https://exchange.fqdn/, so when I switch from OWA to ECP or vice versa I don't hit ADFS again to re-authenticate. Since my browser already has the cookie from the previously signed-in session, it tries to access the new URL … and fails! And this is where I hit a wall and couldn't proceed further. I figure if there's some way to make the browser re-authenticate I can make it go to ADFS again … but I don't know how to do that. I fiddled around with the cookie settings in IIS on the CAS server but didn't make any progress. Exchange doesn't let you use different URLs for OWA and ECP, so there's no way to use a different domain for either of these and thus bypass cookies either. Am stuck. 

So that’s it. If anyone comes across this problem and has a better idea I’d be interested in hearing. :)


ADFS across trusted forests


I don't know why there aren't any blog posts on ADFS across trusted forests on the Interwebs. I know people are aware of it (we use it at our firm, for instance), but whenever it comes to cross-forest lookups I only find mention of the new ADFS 4.0 feature of adding an LDAP claims store, as described here or here. That has its advantages in that you don't need any trust between the forests, but assuming you have a trust in place there's an easier method that works in older versions of ADFS too. 

One comes across a brief nod to this in posts that talk about the AlternateLoginID feature, such as this. But there the emphasis is on AlternateLoginID rather than cross-forest lookups. Even on the help page for Set-ADFSClaimsProviderTrust, the -LookupForests switch is only described as “specifies the forest DNS names that can be used to look up the AlternateLoginID”. 

If you have multiple forests linked together in a trust (like my previous lab examples, for instance), all you need to do is specify an AlternateLoginID that is unique across both forests (UPN in my case) and give the names of the forests to -LookupForests.

Here are my two claims provider trusts currently.  

PS C:\> Get-AdfsClaimsProviderTrust | ft Name,Identifier

Name                Identifier
----                ----------
Active Directory    AD AUTHORITY
Branch Office Users http://adfs.two.raxnet.global/adfs/services/trust

Branch Office is my trusted forest, with its own ADFS server. I want to change things such that I can use the Active Directory claims provider itself to query the remote forest. All we need to do is run the following on the ADFS server:

Set-AdfsClaimsProviderTrust -TargetName "Active Directory" -AlternateLoginID userPrincipalName -LookupForests raxnet.global,two.raxnet.global

In the -LookupForests switch I specify all my forests (including the one where the ADFS server itself is). When running this cmdlet, if you get an error about the LDAP server being unavailable (I didn't copy-paste the error so I don't have the exact text, sorry) and you see errors in the Event Logs along these lines – “The Security System detected an authentication error for the server ldap/RELDC1.realm.county.contoso.com. The failure code from authentication protocol Kerberos was “The name or SID of the domain specified is inconsistent with the trust information for that domain. (0xc000019b)”” – then check your name suffix routing. You might not expect anything to be wrong with your name suffix routing (I didn't), but that is probably the culprit (if you Google for this error that's all you come across anyway). In my case I had the UPN suffix raxnet.global in both domains and that was causing a conflict (expected), but because the UPN suffix matched one of the domains I think it was causing issues locating the DC of that domain, and hence the error. I removed the conflicting UPN suffix and the cmdlet started working. 

Above I am using the UPN as my AlternateLoginID, but I can use any other attribute too as long as it is indexed and replicated to the Global Catalog (e.g. mail, employeeID).
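For instance, here's the same cmdlet switched to the mail attribute – a minimal sketch, using my lab's forest names:

Set-AdfsClaimsProviderTrust -TargetName "Active Directory" -AlternateLoginID mail -LookupForests raxnet.global,two.raxnet.global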

Check out this blog post for a flowchart on the process followed when using AlternateLoginID. One thing to bear in mind (quoting from that blog post): The AlternateLoginID lookup only occurs for the following scenarios:

  • User signs in using AD FS form-based page (FBA)
  • User on rich client application signs in against username & password endpoint
  • User performs pre-authentication through Web Application Proxy (WAP)
  • User updates the corporate account password with AD FS update password page

What this means is that if Windows Integrated Authentication fails for some reason and you get a prompt to enter the username and password (not the forms-based page's username and password fields, mind you), and you enter the AlternateLoginID attribute & password correctly, authentication will fail. But if you enter the domain\sAMAccountName format and try to authenticate, it will work. This is because when WIA fails and you type in the credentials manually it does not seem to be considered WIA, nor is it FBA; it doesn't fall under the 4 categories above, and AlternateLoginID does not get used. 

Lastly, a cool side effect of doing cross-forest ADFS login this way is that the previous problem I mentioned – ADFS across forests not working with OWA & ECP – goes away. :) I am not sure why, but now when I log in to OWA from the organization forest to the resource forest, and then try to access ECP, it works fine without any change to the claims. 

[Aside] Enable ADFS Logging

[Aside] Registry keys for Enabling TLS 1.2 etc.


Came across via this Exchange blog post. 

  • Registry keys for enabling TLS 1.2 as the default, as well as making it available if applications ask for it. Also contains keys to enable this for .NET 3.5 and 4.0. 
  • Registry keys for disabling TLS 1.0 and 1.1. 

None of this is new stuff. I have used and seen these elsewhere too. Today I thought of collecting them in one place so I have them handy. 
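As a rough PowerShell rendering of the usual SCHANNEL and .NET keys – treat this as an illustrative sketch rather than the definitive list, since the exact keys you need depend on your OS and .NET versions:

# Enable TLS 1.2 for both the client and server sides of SCHANNEL
$base = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2'
foreach ($side in 'Client','Server') {
    New-Item -Path "$base\$side" -Force | Out-Null
    Set-ItemProperty -Path "$base\$side" -Name 'Enabled' -Value 1 -Type DWord
    Set-ItemProperty -Path "$base\$side" -Name 'DisabledByDefault' -Value 0 -Type DWord
}

# Let .NET 4.x applications use strong crypto (and thus TLS 1.2)
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord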

[Aside] Clearing Credential Manager

TIL: Teams User-Agent String


Today I learnt that Teams too has a User-Agent string, and it defaults to that of the default browser of the OS. In my case – macOS with Firefox as the default – it was using the User-Agent string of Firefox. I encountered this this morning when Teams refused to sign in to our environment via ADFS: it wasn't doing forms-based authentication as it usually did but Windows Integrated Authentication instead, and that was failing because I am not on a domain-joined machine. 

All of this happened due to a chain of events. At work we enabled WIA SSO on ADFS for Safari and Firefox. At home my VPN client re-connected this morning, causing DNS resolution to happen via the VPN DNS server rather than my home router. ADFS sign-on for Teams thus went via the internal route to the ADFS server at work, and since that was now set to accept WIA for Firefox it defaulted to WIA instead of forms-based authentication. The fix was to sort out my DNS troubles … so I did that, and here we are! :) Good to know!

Certificates in the time of Let’s Encrypt


Here’s me generating two certs – one for “edge.raxnet.global” (with a SAN of “mx.raxnet.global”), another for “adfs.raxnet.global”. Both are “public” certificates, using Let’s Encrypt. 

PS C:\Windows\system32> New-PACertificate 'edge.raxnet.global','mx.raxnet.global' -AcceptTOS -Contact letsencrypt@rakhesh.com -DnsPlugin Azure -PluginArgs $azParams -DnsAlias 'edge.acme.raxnet.global'
WARNING: Fewer DnsPlugin values than Domain values supplied. Using Azure for the rest.
WARNING: Fewer DnsAlias values that Domain values supplied. Using edge.acme.raxnet.global for the rest.

Subject               NotAfter             KeyLength Thumbprint                               AllSANs
-------               --------             --------- ----------                               -------
CN=edge.raxnet.global 19-Apr-19 3:39:53 PM 2048      144209DC1CFD00C5136DA145FF71E57DC302CD84 {edge.raxnet.global, mx.raxnet.global}


PS C:\Windows\system32> New-PACertificate 'adfs.raxnet.global' -AcceptTOS -Contact letsencrypt@rakhesh.com -DnsPlugin Azure -PluginArgs $azParams -DnsAlias 'adfs.acme.raxnet.global'

Subject               NotAfter             KeyLength Thumbprint                               AllSANs
-------               --------             --------- ----------                               -------
CN=adfs.raxnet.global 19-Apr-19 3:40:24 PM 2048      3DF5E9E4AE62A68CCE8F8518329E2975413BFCB2 {adfs.raxnet.global}

Easy peasy!

And here are the files themselves:

PS C:\> Get-PACertificate -List | fl

Subject       : CN=adfs.raxnet.global
NotBefore     : 19-Jan-19 3:40:24 PM
NotAfter      : 19-Apr-19 3:40:24 PM
KeyLength     : 2048
Thumbprint    : 3DF5E9E4AE62A68CCE8F8518329E2975413BFCB2
AllSANs       : {adfs.raxnet.global}
CertFile      : C:\Users\Rakhesh\AppData\Local\Posh-ACME\acme-v02.api.letsencrypt.org\49761602\adfs.raxnet.global\cert.cer
KeyFile       : C:\Users\Rakhesh\AppData\Local\Posh-ACME\acme-v02.api.letsencrypt.org\49761602\adfs.raxnet.global\cert.key
ChainFile     : C:\Users\Rakhesh\AppData\Local\Posh-ACME\acme-v02.api.letsencrypt.org\49761602\adfs.raxnet.global\chain.cer
FullChainFile : C:\Users\Rakhesh\AppData\Local\Posh-ACME\acme-v02.api.letsencrypt.org\49761602\adfs.raxnet.global\fullchain.cer
PfxFile       : C:\Users\Rakhesh\AppData\Local\Posh-ACME\acme-v02.api.letsencrypt.org\49761602\adfs.raxnet.global\cert.pfx
PfxFullChain  : C:\Users\Rakhesh\AppData\Local\Posh-ACME\acme-v02.api.letsencrypt.org\49761602\adfs.raxnet.global\fullchain.pfx
PfxPass       : System.Security.SecureString

Subject       : CN=edge.raxnet.global
NotBefore     : 19-Jan-19 3:39:53 PM
NotAfter      : 19-Apr-19 3:39:53 PM
KeyLength     : 2048
Thumbprint    : 144209DC1CFD00C5136DA145FF71E57DC302CD84
AllSANs       : {edge.raxnet.global, mx.raxnet.global}
CertFile      : C:\Users\Rakhesh\AppData\Local\Posh-ACME\acme-v02.api.letsencrypt.org\49761602\edge.raxnet.global\cert.cer
KeyFile       : C:\Users\Rakhesh\AppData\Local\Posh-ACME\acme-v02.api.letsencrypt.org\49761602\edge.raxnet.global\cert.key
ChainFile     : C:\Users\Rakhesh\AppData\Local\Posh-ACME\acme-v02.api.letsencrypt.org\49761602\edge.raxnet.global\chain.cer
FullChainFile : C:\Users\Rakhesh\AppData\Local\Posh-ACME\acme-v02.api.letsencrypt.org\49761602\edge.raxnet.global\fullchain.cer
PfxFile       : C:\Users\Rakhesh\AppData\Local\Posh-ACME\acme-v02.api.letsencrypt.org\49761602\edge.raxnet.global\cert.pfx
PfxFullChain  : C:\Users\Rakhesh\AppData\Local\Posh-ACME\acme-v02.api.letsencrypt.org\49761602\edge.raxnet.global\fullchain.pfx
PfxPass       : System.Security.SecureString

The password for all these is ‘poshacme’.

I don't know if I will ever use this at work, but I was reading up on Let's Encrypt and ACME Certificate Authorities and decided to play with it for my home lab. A bit of work went on behind the scenes for the above, so here are some notes.

First off, ACME certificates are all about automation. You get certificates that are valid for only 90 days, and the idea is that every 60 days you renew them automatically. So the typical approach of a CA verifying your identity via email doesn't work; all verification is via automated methods like a DNS record or HTTP query, and you need some tool to do all this for you. There's no website where you go and submit a CSR or get a certificate bundle to download. Yeah, shocker! In the ACME world everything is via clients, a list of which you can find here. Most of these are for Linux or *nix, but there are a few Windows ones too (and if you use a web server like Caddy you even get HTTPS out of the box with Let's Encrypt). 

To dip my feet in I started with Certify the Web, a GUI client for all this. It was fine, but didn't hook me much, and so I moved to Posh-ACME, a pure PowerShell CLI tool. I've liked this so far. 

Apart from installing the tool via something like Install-Module -Name Posh-ACME there’s some background work that needs to be done. A good starting point is the official Quick Start followed by the Tutorial. What I go into below is just a rehash for myself of what’s already in the official docs. :)

Requesting a certificate via Posh-ACME is straightforward. Essentially, if I want a certificate for a domain 'sso.raxnet.global' I would do something like this: 

PS C:\> New-PACertificate 'sso.raxnet.global' -AcceptTOS -Contact admin@rakhesh.com
WARNING: DnsPlugin not specified. Defaulting to Manual.

Please create the following TXT records:
------------------------------------------
_acme-challenge.sso.raxnet.global -> _jTw7ckGRGUj_-eeTdgGBoV5ycjd797R6n2oFOTmaLw
------------------------------------------

At this point I'd have to go and create the specified TXT record. The command will wait 2 mins and then check for the existence of the TXT record. Once it finds it, the certificates are generated and all is good. (If it doesn't find the record it keeps waiting; or I can Ctrl+C to cancel.) My ACME account will be admin@rakhesh.com, with the certificate tied to that. 

If I want to be super lazy, this is all I need to do! :) Run this command every 60 days or so (worst case every 89 days), update the _acme-challenge.domain TXT record as requested (the random value changes each time), and bam! I am done. 

If I want to automate it, however, I need to do some more stuff. Specifically: 1) you need to be on a DNS provider that gives you an API to update its records, and 2) hopefully said DNS provider is on the Posh-ACME supported list. If so, all is good. I use Azure DNS for my domain, and instructions for using Azure DNS are already in the Posh-ACME documentation. If I were on a DNS provider that didn't have APIs, or for whatever reason wanted to use a different DNS provider from my main domain's, I could even make use of CNAMEs. I like this CNAME idea, so even though I could have used my primary zone hosted in Azure DNS, I decided to make another zone in Azure DNS and go down the CNAME route. 

So, here's how the CNAME thing works. Notice above that Posh-ACME asked me to create a record called _acme-challenge.sso.raxnet.global? Basically, for every domain you are requesting a certificate for (including names in the Subject Alternative Name (SAN)), you need to create an _acme-challenge.<domain> TXT record with the random challenge given by ACME. What you can also do, however, is have a separate domain – say 'acme.myotherdomain' – and pre-create CNAME records like _acme-challenge.<whatever>.mymaindomain -> <whatever>.myotherdomain, such that when the validation process looks for _acme-challenge.<whatever>.mymaindomain it follows the CNAME through to <whatever>.myotherdomain and updates & verifies the record there. So my main domain never gets touched by any automatic process; only this other domain that I set up (which can even be a sub-domain of my main domain) is where all the automatic action happens. 

In my case I created a CNAME from _acme-challenge.sso.raxnet.global to sso.acme.raxnet.global (where acme.raxnet.global is my Azure DNS hosted zone). I have to create the CNAME record beforehand, but I don't need to make any TXT records in the acme.raxnet.global zone – that happens automatically. 
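Since the zone lives in Azure DNS, creating such a CNAME can itself be scripted. A sketch, assuming the Az.Dns cmdlets and a made-up resource group name:

New-AzDnsRecordSet -ZoneName 'raxnet.global' -ResourceGroupName 'dns-rg' -Name '_acme-challenge.sso' -RecordType CNAME -Ttl 3600 -DnsRecords (New-AzDnsRecordConfig -Cname 'sso.acme.raxnet.global')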

To automate things I then made a service account (aka “App registrations” in Azure-speak) whose credentials I could pass on to Posh-ACME, and whose rights were restricted. The Posh-ACME documentation has steps on creating a custom role in Azure to just update TXT records; I was a bit lazy here and simply made a new App Registration via the Azure portal and delegated it “DNS Contributor” rights to the zone. 

NewImage

NewImage

Not shown in the screenshot, after creating the App Registration I went to its settings and also assigned a password. 

NewImage

That done, the next step is to collect various details such as the subscription ID, tenant ID, and the App Registration name and password into a variable. Something like this:

$azParams = @{
  AZSubscriptionId='REPLACE ME';
  AZTenantId='REPLACE ME';
  AZAppCred=(Get-Credential)
}

This is a one time thing as the credentials and details you enter here are then stored in the local profile. This means renewals and any new certificate requests don’t require the credentials etc. to be passed along as long as they use the same DNS provider plugin. 

That's it, really. This is the big thing you have to do to make the DNS part automated. Assuming I have already filled in the $azParams hash-table as above (by copy-pasting it into a PowerShell window after filling in the details, and then entering the App Registration name and password when prompted) I can request a new certificate thus:

New-PACertificate 'sso.raxnet.global' -AcceptTOS -Contact admin@rakhesh.com -DnsPlugin Azure -PluginArgs $azParams -DnsAlias 'sso.acme.raxnet.global'

Key points:

  • The -DnsPlugin switch specifies that I want to use the Azure plugin
  • The -PluginArgs switch passes along the arguments this plugin expects; in this case the credentials etc. I filled into the $azParams hash-table
  • The -DnsAlias switch specifies the CNAME records to update; you specify one for each domain. For example, in this case 'sso.raxnet.global' will be aliased to 'sso.acme.raxnet.global', so the latter is what the DNS plugin will go and update. If I specified two domains, e.g. 'sso.raxnet.global','sso2.raxnet.global' (an array of domains), then I would have had to specify two aliases 'sso.acme.raxnet.global','sso2.acme.raxnet.global' – OR I could specify just one alias 'sso.acme.raxnet.global', provided I have created CNAMEs from both domains to this same entry, and the plugin will use this alias for both domains. My first example at the beginning of this post does exactly that. 

That's it! To renew my certs I use the Submit-Renewal cmdlet. I don't even need to run it manually. All I need do is create a scheduled task to run Submit-Renewal -AllAccounts, which renews all certificates tied to the current profile (so if I have certificates under two different ACME accounts – e.g. admin@rakhesh.com and admin2@rakhesh.com – but both are under the same Windows account where I am running this cmdlet from, both accounts would have their certs renewed). 
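A minimal sketch of such a scheduled task (the task name and time are arbitrary; the task must run under the same Windows account that owns the Posh-ACME profile):

$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -Command "Submit-Renewal -AllAccounts"'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'Posh-ACME Renewal' -Action $action -Trigger $trigger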


What I want to try next is how to get these certs updated with Exchange and ADFS. Need to figure out if I can automatically copy these downloaded certs to my Exchange and ADFS servers. 

Demoting a 2012R2 Domain Controller using PowerShell


Such a simple command. But a bit nerve-wracking, coz it doesn't have many options and you wonder if it will somehow remove your entire domain and not just the DC you are targeting. :)

Uninstall-ADDSDomainController

You don’t need to add anything else. This will prompt for the new local admin password and proceed with removal. 
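If you'd rather supply the password up front and skip the confirmation prompts, something like this sketch should work – check Get-Help Uninstall-ADDSDomainController for the exact parameters on your OS version:

Uninstall-ADDSDomainController -LocalAdministratorPassword (Read-Host -AsSecureString 'New local admin password') -Force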


Unable to install a Windows Update – CBS error 0x800f0831


Note to self for next time. 

Was trying to install a Windows Update on a Server 2012 R2 machine and it kept failing. 

Checked C:\Windows\WindowsUpdate.log and found the following entry:

2B00-40F5-B24C-3D79672A1800}	501	0	wusa	Success	Content Install	Installation Started: Windows has started installing the following update: Security Update for Windows (KB4480963)
2019-01-29 10:27:36:351 832 27a0 Report CWERReporter finished handling 2 events. (00000000)
2019-01-29 10:32:00:336 7880 25e8 Handler FATAL: CBS called Error with 0x800f0831,
2019-01-29 10:32:11:132 7880 27b4 Handler FATAL: Completed install of CBS update with type=0, requiresReboot=0, installerError=1, hr=0x800f0831

Checked C:\Windows\Logs\CBS\CBS.log and found the following:

2019-01-29 10:31:57, Info  CBS  Store corruption, manifest missing for package: Package_1682_for_KB4103725~31bf3856ad364e35~amd64~~6.3.1.4
2019-01-29 10:31:57, Error CBS  Failed to resolve package 'Package_1682_for_KB4103725~31bf3856ad364e35~amd64~~6.3.1.4' [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info  CBS  Mark store corruption flag because of package: Package_1682_for_KB4103725~31bf3856ad364e35~amd64~~6.3.1.4. [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info  CBS  Failed to resolve package [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info  CBS  Failed to get next package to re-evaluate [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info  CBS  Failed to process component watch list. [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info  CBS  Perf: InstallUninstallChain complete.
2019-01-29 10:31:57, Info  CSI  0000031d@2019/1/29:10:31:57.941 CSI Transaction @0xdf83491d10 destroyed
2019-01-29 10:31:57, Info  CBS  Exec: Store corruption found during execution, but auto repair is already attempted today, skip it.
2019-01-29 10:31:57, Info  CBS  Failed to execute execution chain. [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Error CBS  Failed to process single phase execution. [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info  CBS  WER: Generating failure report for package: Package_for_RollupFix~31bf3856ad364e35~amd64~~9600.19235.1.5, status: 0x800f0831, failure source: Execute, start state: Staged, target state: Installed, client id: WindowsUpdateAgent

So it looks like KB4103725 is the problem? This is a rollup from May 2018. Checked via DISM whether it is in any stuck state – nope!

dism /online /get-packages /format:table  | findstr /i "4103725"

I downloaded this update, installed it (no issues), then installed my original update … and this time it worked. 
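For reference, installing a downloaded .msu silently goes something like this (the path and filename here are illustrative):

wusa.exe C:\Temp\windows8.1-kb4103725-x64.msu /quiet /norestart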

Deploying Office 2016 language packs (using PowerShell Admin Toolkit)


I need to deploy a language pack for one of our offices via ConfigMgr. I have no idea how to do this! 

What they want is for the language to appear in this section of Office:

[screenshot: the Office language preferences section]

I don't know much about Office so I didn't even know where to start. I found this official doc on deploying languages, which talked about modifying the config file in ProPlus.WW\Config.xml. I spent a lot of time trying to understand how to proceed with that and even downloaded the huge ISOs from VLSC, but had no idea how to deploy them via ConfigMgr. That is, until I spoke to a colleague with more experience in this and realized that what I am really after is the Office 2016 Proofing Toolkit. You see, language packs are for the UI – the menus and all that – whereas if you are only interested in spell check and similar, what you need is the proofing tools. (In retrospect, the screenshot above says so – “dictionaries, grammar checking, and sorting” – but I didn't notice that initially.) 

So, first step: download the last ISO in the list below (the Proofing Tools; 64-bit if that's your case). 

[screenshot: the VLSC download list, with the Proofing Tools ISO last]

Extract it somewhere. It will have a bunch of files like this:

[screenshot: contents of the extracted ISO]

The proofkit.ww folder is your friend. Within that you will find folders for the various languages; see this doc for a list of language identifiers and languages. In the root of that folder is a config.xml file with the following –

<Configuration Product="Proofkit">

<!-- <Display Level="full" CompletionNotice="yes" SuppressModal="no" AcceptEula="no" /> -->
<!-- <Logging Type="standard" Path="%temp%" Template="Microsoft Office Proofkit Setup(*).txt" /> -->

<!-- <USERNAME Value="Customer" /> -->
<!-- <COMPANYNAME Value="MyCompany" /> -->
<!-- <INSTALLLOCATION Value="%programfiles%\Microsoft Office" /> -->
<!-- <LIS CACHEACTION="CacheOnly" /> -->
<!-- <LIS SOURCELIST="\\server1\share\Office;\\server2\share\Office" /> -->
<!-- <DistributionPoint Location="\\server\share\Office" /> -->
<!-- <OptionState Id="OptionID" State="absent" Children="force" /> -->
<!-- <Setting Id="SETUP_REBOOT" Value="IfNeeded" /> -->
<!-- <Command Path="%windir%\system32\msiexec.exe" Args="/i \\server\share\my.msi" QuietArg="/q" ChainPosition="after" Execute="install" /> -->

</Configuration>

By default this file does nothing – everything's commented out, as you can see. If you want additional languages, you modify the config.xml first and then pass it to setup.exe via a command like setup /config \path\to\this\config.xml. The setup command is the setup.exe in the same folder. 

Here’s my config.xml file which enables two languages and disables everything else.

<Configuration Product="Proofkit">

<!-- <Display Level="full" CompletionNotice="yes" SuppressModal="no" AcceptEula="no" /> -->
<Display Level="none" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" />
	
<!-- <Logging Type="standard" Path="%temp%" Template="Microsoft Office Proofkit Setup(*).txt" /> -->
<!-- <USERNAME Value="Customer" /> -->
<!-- <COMPANYNAME Value="MyCompany" /> -->
<!-- <INSTALLLOCATION Value="%programfiles%\Microsoft Office" /> -->
<!-- <LIS CACHEACTION="CacheOnly" /> -->
<!-- <LIS SOURCELIST="\\server1\share\Office;\\server2\share\Office" /> -->
<!-- <DistributionPoint Location="\\server\share\Office" /> -->
<!-- <OptionState Id="OptionID" State="absent" Children="force" /> -->


<OptionState Id="ProofingTools_1086" State="local" Children="force" />
<OptionState Id="ProofingTools_1033" State="local" Children="force"/>


<!-- explicitly disable the ones I don't need; probably not necessary but I came across a forum post where the author had to do this -->
<OptionState Id="IMEMain_1028" State="absent" Children="force"/>
<OptionState Id="IMEMain_1041" State="absent" Children="force"/>
<OptionState Id="IMEMain_1042" State="absent" Children="force"/>
<OptionState Id="IMEMain_2052" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1025" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1026" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1027" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1028" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1029" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1030" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1031" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1032" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1035" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1036" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1037" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1038" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1039" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1040" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1041" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1042" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1043" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1044" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1045" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1046" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1047" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1048" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1049" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1050" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1051" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1052" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1053" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1054" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1055" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1056" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1057" State="absent" Children="force" />
<OptionState Id="ProofingTools_1058" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1060" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1061" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1062" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1063" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1065" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1066" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1067" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1068" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1069" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1071" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1074" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1076" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1077" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1078" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1079" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1081" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1082" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1087" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1088" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1089" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1091" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1092" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1093" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1094" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1095" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1096" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1097" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1098" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1099" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1100" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1101" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1102" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1106" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1110" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1111" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1115" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1121" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1123" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1128" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1130" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1132" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1134" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1136" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1153" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1159" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1160" State="absent" Children="force"/>
<OptionState Id="ProofingTools_1169" State="absent" Children="force"/>
<OptionState Id="ProofingTools_2052" State="absent" Children="force"/>
<OptionState Id="ProofingTools_2068" State="absent" Children="force"/>
<OptionState Id="ProofingTools_2070" State="absent" Children="force"/>
<OptionState Id="ProofingTools_2074" State="absent" Children="force"/>
<OptionState Id="ProofingTools_2108" State="absent" Children="force"/>
<OptionState Id="ProofingTools_2117" State="absent" Children="force"/>
<OptionState Id="ProofingTools_3076" State="absent" Children="force"/>
<OptionState Id="ProofingTools_3082" State="absent" Children="force"/>
<OptionState Id="ProofingTools_3098" State="absent" Children="force"/>
<OptionState Id="ProofingTools_5146" State="absent" Children="force"/>

	
<!-- <Setting Id="SETUP_REBOOT" Value="IfNeeded" /> -->
<Setting Id="SETUP_REBOOT" Value="Never" />

<!-- <Command Path="%windir%\system32\msiexec.exe" Args="/i \\server\share\my.msi" QuietArg="/q" ChainPosition="after" Execute="install" /> -->

</Configuration>

Step 2 is to copy setup.exe, setup.dll, proofkit.ww, and proofmui.en-us to a folder of their own in your ConfigMgr content store. It's important to copy proofmui.en-us too; I had missed that initially and was getting “The language of this installation package is not supported by your system” errors when deploying. After that you'd make a new application which runs a command like setup.exe /config \path\to\this\config.xml. I am not going into the details of that. These two blog posts are excellent references: this & this.

At this point I was confused again, though. Everything I read about the proofing kit made it sound like a one-time deal – as in, you install all the languages you want and you are done. What I couldn't understand was how I would go about adding/removing languages incrementally. What I mean is: say I modified this file to add Spanish and Portuguese as languages, and I deploy the application again … since all machines already have the proofing kit package installed, and its product code is already present in the detection methods, wouldn't the deployment silently do nothing?

To see why this doesn’t make sense to me, here are the typical instructions (based on above blog posts):

  • Copy to content store
  • Modify config.xml with the languages you are interested in 
  • Create a new ConfigMgr application. While creating it you go for the MSI method and point it to the proofkit.ww\proofkitww.msi file. This fills in the MSI detection code etc. in ConfigMgr. 
  • After that, edit the application you created: modify the content location to remove the proofkit.ww part (because we are now going to run setup.exe from the folder above it), and modify the installation program in the Programs tab to be setup.exe /config proofkit.ww\config.xml.

[screenshots: the Programs tab and the detection method, both showing the MSI product code]

Notice how the uninstall program and detection method both have the product code of the MSI we targeted initially. So what do I do if I modify the config.xml file later and want to re-deploy the application? Since it will detect the MSI product code from the previous deployment it won't run at all; all I can do is uninstall the previous installation first and then re-install – but that's going to interrupt users, right? 

Speaking to my colleagues, it seems the general approach is to include all the languages you want upfront, then add some custom detection methods so you don't depend on the MSI product code above, and push out new languages if needed by creating new applications. I couldn't find mention of something like this when I Googled (probably coz I wasn't asking the right questions), so here goes what I did based on what I understood from others. 

As before, create the application so we are at the screenshot stage above. As it stands, the application will install and will detect that it has installed correctly if it finds the MSI product code. What I need is something extra on top, so that I can re-deploy the application and it will notice that in spite of the MSI being installed it needs to re-install. First I played around with adding a batch file as a second deployment type after the MSI deployment type, having it add a registry key. Something like this:

@echo off
SET KEY=OfficeProofingKit2016
SET VER=1

reg add HKLM\Software\MyFirm /v %KEY% /t REG_SZ /d %VER%

This adds a key called OfficeProofingKit2016 with value 1. Whenever I change my languages I can update the version to kick off a new install. I added this as a detection method to the batch file deployment type, and made the MSI deployment type a dependency of it. The idea being that when I change languages and update the batch file and detection method with a new version, it will trigger a re-run of the batch file, which will in turn cause the MSI deployment type to be re-run. 

That turned out to be a dead end, coz 1) I am not entirely clear how multiple deployment types work, and 2) I don't think whatever logic I had in my head was correct anyway. When the MSI deployment type re-runs, wouldn't it see the product is already installed and just silently continue?! I dunno. 

Fast forward. I took a bath, cleared my head, and started looking for ways in which I could do both the installation and the tattooing in the same script. I didn't want to go with batch files as they are outdated (plus there's the thing with UNC paths etc.). I didn't want to do VBScript as that's even more outdated :p and what I really should be doing is some PowerShell scripting, to be really cool and do this like a pro. Which led me to the PowerShell App Deployment Toolkit (PSADT). Oh. My. God. Wow! What a thing. 

The website's a bit sparse on documentation, but that's coz you've got to download the toolkit and look at the Word doc and examples in there, plus a bit of Googling to get you started with what others are doing. But boy, is PSADT something! Once you download the PSADT zip file and extract its contents there's a toolkit folder with the following:

[screenshot: contents of the PSADT toolkit folder]

This folder is what you copy over to the content store of whatever application you want to install, and into its “Files” folder goes all the application deployment stuff – the things you'd previously have copied into the content store. You can install/uninstall by invoking the Deploy-Application.ps1 file, or you can simply run the Deploy-Application.exe file. 

[screenshot: the deployment type, now a script]

Notice I changed the deployment type to a script instead of MSI, as it previously was. The only program I have in it is Deploy-Application.exe.

[screenshot: the Programs tab with Deploy-Application.exe]

And I changed the detection method to be the registry key I am interested in with the value I want. 

[screenshot: the detection method, now the registry key]

That’s all. Now for the fun stuff, which is in the Deploy-Application.ps1 file. 

At first glance that file looks complicated. That's because there's a lot of stuff in it, including comments and variables etc., but what we really need to concern ourselves with is certain sections. That's where you set some variables, plus do things like install applications (via MSI or by directly running an exe, like I am doing here), do some post-install stuff (which is what I wanted to do – the point of this whole exercise!), uninstall stuff, etc. In fact, this is all I had to add to the file for my stuff:

[string]$appVendor = 'Microsoft'
[string]$appName = 'Office Proofing Kit 2016'
[string]$appVersion = '2016'
[string]$appArch = ''
[string]$appLang = 'EN'
[string]$appRevision = '01'
[string]$appScriptVersion = '1.0.1'
[string]$appScriptDate = '02/06/2019' # mm/dd/yyyy
[string]$appScriptAuthor = 'Rakhesh Sasidharan'
[string]$appRegKey = 'HKLM\SOFTWARE\MyFirm\Software'
[string]$appRegKeyName = 'OfficeProofingKit2016'
[string]$appRegKeyValue = '2' # !!when you change this version be sure to update the detection method!!

## <Perform Installation tasks here>
Execute-Process -Path "$dirFiles\setup.exe" -Parameters "/config proofkit.ww\config.xml"

## <Perform Post-Installation tasks here>
Set-RegistryKey -Key "$appRegKey" -Name "$appRegKeyName" -Value "$appRegKeyValue" -Type String -ContinueOnError:$True
Update-GroupPolicy
		
## Display a message at the end of the install
If (-not $useDefaultMsi) { Show-InstallationPrompt -Message 'New languages were successfully added to your Office 2016 installation. Please close and open Word, Outlook, etc. for the new languages to be enabled.' -ButtonRightText 'OK' -Icon Information -NoWait }

# <Perform Uninstallation tasks here>
Execute-MSI -Action Uninstall -Path '{90160000-00CC-0000-1000-0000000FF1CE}'

## <Perform Post-Uninstallation tasks here>
Remove-RegistryKey -Key "$appRegKey" -Name $appRegKeyName

That’s it! :) That takes care of running setup.exe with the config.xml file as an argument. Tattooing the registry. Informing users. And even undoing these changes when I want to uninstall.

I found the Word document that came with PSADT and this cheatsheet very handy to get me started.

Update: Forgot to mention – all the above steps only install the languages on user machines. To actually enable a language you have to use GPOs. Additionally, if you want to change keyboard layouts post-install, that's done via a registry key, which you can add to the PSADT deployment itself. The key is HKEY_CURRENT_USER\Keyboard Layout\Preload. Here's a list of values.
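A hedged example of what that could look like – 00000409 is the en-US layout from the list above, and since Preload is per-user this would have to run in the user's context:

Set-ItemProperty -Path 'HKCU:\Keyboard Layout\Preload' -Name '1' -Value '00000409'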

Useful NPS & certificate stuff (for myself)


Came across an odd problem at work the other day involving NPS and wireless APs. We have an internal wireless network that is set to authenticate against Microsoft NPS using certificates. The setup is quite similar to what is detailed here, with the addition of using internal CA issued certificates for NPS to authenticate the users (as detailed here or here, for instance). 

All wireless clients stopped being able to connect. That's when I realized the logs generated by NPS (at C:\Windows\System32\Logfiles) are horrendous. One option is to change the log format to “IAS (Legacy)” and “Daily” and use a script such as the one here to analyze them. It is also worth changing the format to “DTS Compliant”, as that produces more readable XML output. All of this is in the “Accounting” section, BTW: 

[screenshot: the NPS Accounting section]

Pro tip: if you go with the XML format and use Visual Studio Code, you can prettify the XML as mentioned here.

From the logs we could see entries like this:

    <Authentication-Type data_type="0">5</Authentication-Type>
    <Packet-Type data_type="0">3</Packet-Type>
    <Reason-Code data_type="0">259</Reason-Code>

In this case the packet type of 3 means the access was rejected, and the reason code 259 means the CRL check failed. (Nope, I don't know these codes off the top of my head! My colleague who did the troubleshooting came across this. The PowerShell script I mentioned above converts some of the codes to readable values, but it too missed error 259.) If you want to read more about the flow of traffic and why rejection might happen, this article is a good read. 

We didn't really get to the bottom of this issue – why NPS couldn't retrieve CRLs; it looks to be one of those random issues – but I spent some time reading up on certificates and NPS while troubleshooting, mainly certutil, which can be used to check CRLs etc., so I want to note that here. 

The command certutil /crl (from an admin command prompt on the CA) causes it to publish the CRL. In my case publication was via LDAP, and the command returned no errors. You can find the CRL URL in any issued certificate; in my case it was a long LDAP URL that looked something like ldap:///CN=blahblah,xxxxl?certificateRevocationList?base?objectClass=cRLDistributionPoint. You can use certutil /url with the URL to query it. You can also use ADSI Edit to view the configuration partition and go to the URL to see the last-modified timestamp etc. 

The certutil command has many more useful switches (like in this blog post and this wiki entry – the latter has many more examples). For example you can export a certificate to a file and then run a command such as certutil /verify /urlfetch \path\to\certificate.cer. This will verify the certificate up the chain, and also check the CRL specified in the certificate. 

It is also possible to export a CRL from the CA: certutil /getcrl \path\to\file.crl. You can view the exported CRL via a command like certutil /dump \path\to\file.crl. Lastly, you can import it on a different server via certutil /addstore CA \path\to\file.crl.
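Collecting those certutil commands in one place for quick reference (paths are placeholders):

certutil /crl                                        # publish the CRL (run on the CA)
certutil /verify /urlfetch \path\to\certificate.cer  # verify the chain and check the CRL
certutil /getcrl \path\to\file.crl                   # export the CRL (run on the CA)
certutil /dump \path\to\file.crl                     # view an exported CRL
certutil /addstore CA \path\to\file.crl              # import the CRL on another server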

In our case we ended up exporting the CRL from the CA and importing it on the NPS server to quickly work around the issue. 

Later I learnt that there's a reg key which can be used to disable CRL checking by NPS. Not that you want to do that permanently, but it's useful as a quick fix. Another thing I learnt is that there's a reg key that controls how long the NPS server caches the TLS handle of authenticated computers; by default it is 10 hours, but it can be extended. 
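If memory serves – treat this as an assumption and verify against Microsoft's documentation before using it – the CRL setting lives under the EAP-TLS key (13 = EAP-TLS):

# NoRevocationCheck = 1 disables CRL checking for EAP-TLS. Temporary fix only!
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\RasMan\PPP\EAP\13' -Name 'NoRevocationCheck' -Value 1 -Type DWord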

[Aside] Demystifying the Windows Firewall


Quick shoutout to this old (but not too old) video by Jessica Payne on the Windows Firewall. The stuff on IPSec was new to me. It’s amazing how you can skip targeting source IPs and simply use IPSec to target computers & users or groups of computers & users. 

[TIL] WMI filtering has separate precedence with GPOs


I knew that when it comes to a bunch of GPOs linked to an OU, the one with the lowest number (highest in the list) has the highest priority.

What I learnt today is that if this list includes a GPO with a higher number (i.e. lower priority, one that would typically be overridden by another higher in the list) and that GPO is scoped to a WMI filter, it sits in a separate queue of its own and is processed after all the other GPOs. Thus this GPO, which would typically be overridden, applies above all others within the specific WMI filter scope it is applied to.

Some Windows firewall troubleshooting …


Obvious in retrospect, but today I picked up something new with Windows firewall.

I have a work laptop, and I had been trying to RDP into it from one of my home machines. Easier, you know, when I am not necessarily in front of the laptop but on some other machine and want to quickly RDP in to check email or do some work. Thing is, try what I may, I couldn't ping or RDP to this laptop. I could connect from the laptop to my home network, but the reverse seemed to be blocked.

Initially I suspected the VPN client might be doing something, but that didn't make sense – usually a VPN client blocks outgoing traffic, and that was working fine here. I took a look at the laptop firewall and that had RDP allowed on the public profile, with no restrictions on source IP etc. The RDP services were running, my account was in the correct groups, etc. I even added a catch-all allow rule to the firewall to see if that'd make a difference – but nope! I couldn't disable the firewall, unfortunately, as that was controlled via GPOs.

From the Windows Firewall console you can get to the log files, and those showed me the RDP connections were definitely being dropped. This made no sense considering I had an allow rule.

Next I followed this helpful post to identify which rule was blocking it (I am reproducing the steps here just in case).

      • Open a Windows console (with Administration rights) to enter commands
      • Enable the audit for Windows Filtering Platform (WFP) by running the following commands:
        • auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:enable /failure:enable
        • auditpol /set /subcategory:"Filtering Platform Connection" /success:enable /failure:enable
      • Reproduce the issue
      • Run command: netsh wfp show state (this creates a XML file in the current folder)
      • Open the event viewer: Run (Windows+R) > eventvwr.msc
      • Go to “Windows logs” > “Security”
      • In the list, identify the dropping packet log (hint: use the Search feature on the right menu, searching for items (source IP, destination port, etc.) specific to your issue)
      • In the log details, scroll down and note the filter ID used to block the packet
      • Open the generated XML file and search for the noted filterId; check out the rule name (element “displayData > name” on the corresponding XML node)
      • When you’re done, don’t forget to turn off the audit:
        • auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:disable /failure:disable
        • auditpol /set /subcategory:"Filtering Platform Connection" /success:disable /failure:disable

In my case I had the following:

<item>
    <filterKey>{437be0f0-cb8e-4cc4-b183-6085825d3120}</filterKey>
    <displayData>
        <name>Query User</name>
        <description>Prompt the User for a decision corresponding this Inbound Traffic</description>
    </displayData>
    <flags/>
    <providerKey>{decc16ca-3f33-4346-be1e-8fb4ae0f3d62}</providerKey>
    <providerData>
        <data>fb03000000000000</data>
        <asString>........</asString>
    </providerData>
    <layerKey>FWPM_LAYER_ALE_AUTH_RECV_ACCEPT_V4</layerKey>
    <subLayerKey>{b3cdd441-af90-41ba-a745-7c6008ff2301}</subLayerKey>
    <weight>
        <type>FWP_UINT8</type>
        <uint8>8</uint8>
    </weight>
    <filterCondition numItems="1">
        <item>
            <fieldKey>FWPM_CONDITION_ORIGINAL_PROFILE_ID</fieldKey>
            <matchType>FWP_MATCH_EQUAL</matchType>
            <conditionValue>
                <type>FWP_UINT32</type>
                <uint32>1</uint32>
            </conditionValue>
        </item>
    </filterCondition>
    <action>
        <type>FWP_ACTION_BLOCK</type>
        <filterType/>
    </action>
    <rawContext>0</rawContext>
    <reserved/>
    <filterId>71629</filterId>
    <effectiveWeight>
        <type>FWP_UINT64</type>
        <uint64>9223372036854791168</uint64>
    </effectiveWeight>
</item>

So it was being blocked by a rule named “Query User”, described as “Prompt the User for a decision corresponding this Inbound Traffic”? Made no sense.

Some more Googling on that brought me to this thread, from which I realized that of course there's a setting to merge or override the local firewall rules. Even though the local rules appeared to be active (I could edit them, add new rules, disable them, etc.) they were in fact being ignored, because the GPO settings were overriding them. Once I modified the GPO to allow local rules, I was able to RDP to my laptop. 

Get-ADDomainController : Directory object not found


No, I don't have a solution to the above. But I do have a workaround in case it affects anyone else. :) Instead of Get-ADDomainController, pull the same details via ldifde:

ldifde -d "OU=Domain Controllers,DC=contoso,DC=com" -f c:\output.txt -l "sAMAccountName, operatingSystem" -r "(&(objectClass=computer))"

Of course replace “contoso” and “com” with your domain specific names. 

Update: It could be related to Riverbeds or any other WAN Accelerators you may have. Check this thread.

Thanks to a colleague, a PowerShell based workaround is the following:

(Get-ADDomain -Identity 'contoso.com').ReplicaDirectoryServers | ForEach { Get-ADDomainController -Identity $_ -Server 'contoso.com' }

This gets a list of the read-write domain controllers and runs the Get-ADDomainController cmdlet against each of them. 


How to check LDAPS certificate and TLS version


Get OpenSSL (a list of 3rd-party download sites is here; I went with this one). Then connect to your DC thus:

openssl s_client -connect <Domain_Controller>:636

To test a specific version, add a switch like -tls1_2 or -tls1_1. If it fails you get an error like this (this was me asking for TLS 1.1):

CONNECTED(000002F4)
51720:error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:.\ssl\s3_pkt.c:1498:SSL alert number 70
51720:error:1409E0E5:SSL routines:ssl3_write_bytes:ssl handshake failure:.\ssl\s3_pkt.c:659:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1
    Cipher    : 0000
    Session-ID:
    Session-ID-ctx:
    Master-Key:
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1603452101
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
---
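
If the handshake succeeds, you can also pipe the output through openssl x509 to inspect the certificate the DC returned (a sketch – dc01.contoso.com is a placeholder; the echo just closes the connection so openssl exits):

echo QUIT | openssl s_client -connect dc01.contoso.com:636 | openssl x509 -noout -subject -issuer -dates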

Hope that helps someone!

Update: Just for completeness, here are the regkeys to enable/disable various TLS versions in Windows.
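
For reference, these live under the SCHANNEL branch of the registry. A minimal sketch of disabling TLS 1.0 server-side via PowerShell (a reboot is needed for the change to take effect; test before rolling out):

$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server'
New-Item -Path $key -Force | Out-Null                               # create the key if missing
Set-ItemProperty -Path $key -Name 'Enabled' -Value 0 -Type DWord
Set-ItemProperty -Path $key -Name 'DisabledByDefault' -Value 1 -Type DWord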

Notes on PSADT

Been a while since I worked with PSADT so here’s a quick reminder to myself. PSADT is a god-send for anyone deploying applications via SCCM.

To install just run the script:

Deploy-Application.ps1

# if you want to be explicit (I like to be)
Deploy-Application.ps1 -DeploymentType 'Install'

# or even
Deploy-Application.ps1 'Install'

# silent variant
Deploy-Application.ps1 -DeployMode 'Silent'

To uninstall:

Deploy-Application.ps1 -DeploymentType 'Uninstall'

# or even
Deploy-Application.ps1 'Uninstall'

# silent variant
Deploy-Application.ps1 -DeploymentType 'Uninstall' -DeployMode 'Silent'

You can deploy in one of the following -DeployMode values: Interactive (the default), Silent, or NonInteractive (very silent?).

When calling via SCCM you can use the Deploy-Application.exe helper instead, which launches the script via PowerShell for you. For example:

Deploy-Application.exe -DeploymentType "Install" -DeployMode "Silent"

As opposed to:

powershell.exe -Command "& { & '.\Deploy-Application.ps1' -DeployMode 'Silent'; Exit $LastExitCode }"

If you decide to rename Deploy-Application.ps1 or have another copy in the same folder you could run it thus:

Deploy-Application.exe 'Custom-Script.ps1'

# Or a longer format
Deploy-Application.exe -Command 'C:\Testing\Custom-Script.ps1' -DeploymentType 'Uninstall'

By default, if the MSI signals a reboot requirement (exit code 3010) PSADT swallows it and returns 0 to the calling process. To pass the reboot code through instead, add the switch:

-AllowRebootPassThru

Logs are stored at C:\Windows\Logs\Software.

The only file you need to modify with PSADT is the Deploy-Application.ps1 file. Apart from that there are Files and SupportFiles folders – use the former for storing any MSI and setup files, use the latter for any other files you want to copy over to the target machine. Easy peasy. All other files can be ignored, but you can tweak them if you want to customize PSADT further, such as adding a logo or changing colours. I've never customized them myself, but I know colleagues who do.

There’s a Zero Config MSI install method. This means you don’t modify the Deploy-Application.ps1 file at all. You put the MSI in the Files folder. (Note: you can have only one MSI; if more than one is put there only the first is used). You put the corresponding MST file (if any) in the Files folder too – it must have the same name as the MSI, but with an MST extension of course. Finally you put any patches (MSP files) in the same Files folder – there can be more than one of these and they are installed in alphabetical order (so rename the files if order matters). All you do then is run the Deploy-Application.ps1 script. The layout looks like the sketch below.
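
A sketch of the layout (the file names are placeholders of my choosing):

Toolkit\
    Deploy-Application.ps1       <- untouched in Zero Config mode
    Files\
        MyApp.msi                <- exactly one MSI
        MyApp.mst                <- optional transform, same base name as the MSI
        Patch1.msp               <- optional patches, applied in alphabetical order
        Patch2.msp
    SupportFiles\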

The beauty of PSADT is that it gives you a ton of variables and helper functions you can use in the script. The best reference for all these is their PDF guide. Pages 36-41 (as of this writing) have a list of all the variables while page 41 onwards has a list of functions.

You can determine if you are on a server for instance via the $IsServerOS variable, refer to a user’s Desktop via $envUserDesktop or the profile itself via $envUserProfile, even things like the public desktop via $envCommonDesktop. These are just some examples, and nothing you can’t do in a longer way via PowerShell, but using PSADT gives you all these for free (which is always the point of a framework or language – give you things for free so you can focus on the bigger stuff). You can refer to the Files folder via $dirFiles and SupportFiles via $dirSupportFiles. For example the following uses a function provided by PSADT to copy over some files to C:\Windows:

Copy-File -Path "$dirSupportFiles\MyApp.ini" -Destination "$envWinDir\MyApp.ini"

An especially useful function is the Execute-MSI one. Here’s me using it to install Teams:

Execute-MSI -Action 'Install' -Path "$dirFiles\Teams_windows_x64.msi" -Parameters '/qn /NORESTART ALLUSER=1 ALLUSERS=1 OPTIONS="noAutoStart=true"'

Notice I can specify:

  • an action (-Action 'Install', -Action 'Uninstall', -Action 'Repair', -Action 'Patch', -Action 'ActiveSetup'),
  • parameters to the msiexec process (the -Parameters switch overrides the defaults, but we can use -AddParameters to add instead, or -SecureParameters to not show the parameters in any logs),
  • transforms (not shown above but use the -Transform switch; see the sketch after this list),
  • any logging options via -LoggingOptions (default is /l*v to the default logging path)
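
For instance, a variant with a transform and additional parameters (the file names here are hypothetical):

Execute-MSI -Action 'Install' -Path "$dirFiles\MyApp.msi" -Transform "$dirFiles\MyApp.mst" -AddParameters 'ALLUSERS=1'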

I find this file to be a useful reference of all the functions. This is what gets pulled in to define the functions and variables. Some useful functions are Get-UserProfiles, which lists all the user profiles on the machine (the ProfilePath property gives the path of each profile), and Invoke-HKCURegistrySettingsForAllUsers, which runs registry operations against the HKCU hive of every user. Here’s me using the latter to delete certain Teams-related keys:

# a block of registry cmdlets (provided by PSADT) that I want to run
# note these target HKCU; the -SID part lets PSADT apply them per-user later on
[scriptblock]$HKCURegistryChanges = {
	Remove-RegistryKey -Key 'HKCU\Software\Microsoft\Office\Teams' -SID $UserProfile.SID		
	Remove-RegistryKey -Key 'HKCU\Software\Microsoft\Windows\CurrentVersion\Uninstall\Teams' -SID $UserProfile.SID
}

# I run the above script block against the HKCU for all users
Invoke-HKCURegistrySettingsForAllUsers -RegistrySettings $HKCURegistryChanges

I’ve put the Teams PSADT script I made (from which I showed snippets above) in this GitHub repo.

Useful links:

  • File with all the functions (you can search through it)
  • PDF guide
  • Not sure if this is an official site (doesn’t seem to be) but it has a reference of all the functions (it seems to be a website version of the PDF)

More Notes on Teams

Quick shoutout to this excellent blog post by James Rankin on installing Teams (aptly titled installing the damned thing).

A few weeks ago I had blogged about Teams and I thought I had it under control. Since then however, I realized that whenever users log in to my Citrix session servers Teams always launches, even though I had OPTIONS="noAutoStart=true" set as part of the installer. Moreover, I also had the GPO from Microsoft set – Prevent Microsoft Teams from starting automatically after installation – but it looks like it was being ignored?

Googling on this there’s a whole load of conflicting info. Apparently the GPO only kicks in if you applied it before a user has logged in; once the user has logged in the GPO doesn’t matter (seriously, wtf?!). Never mind, there are workarounds, like modifying the desktop-config.json in each user’s AppData Roaming folder via a startup PowerShell script (hah! nuts) to manipulate a setting there (example scripts here, here, and here). Even though I hate the idea I tried it… but that too doesn’t work. I quit Teams, make the change, logout and login and bam! Teams is back.

Why doesn’t Teams just use a registry key or be controllable via a GPO? I mean you can control some aspects like disabling fallback mode in VDI via an HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Teams\DisableFallback or HKEY_CURRENT_USER\SOFTWARE\Microsoft\Office\Teams\DisableFallback registry key, so it’s not too much of an ask. Modifying settings via startup scripts that tweak a JSON file is so 1990s (and yet 2020s coz we use PowerShell). Weird!

I spent a couple of days wondering if it was Citrix or something wrong in my environment. Then I remembered there’s an HKLM key that Teams sets as part of its install. The Teams installer creates an entry under HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Run, which is what injects Teams into each user’s profile and sets up an entry under their HKCU to update and launch Teams subsequently.

According to the Citrix docs these can be in three places:

  • HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Run
  • HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
  • HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Run

In my case it was the first one, so I deleted that to stop Teams auto-launching at startup.

While Googling on that registry key to see if anyone else has encountered this (coz at this point I was feeling pretty stupid since none of the “standard” ways seemed to work for me, and deleting the registry key felt like a hack), that’s when I came across James’s funny post. That gave me a chuckle!

From there I learnt the following caveat to my approach though:

So can we just delete the HKLM entry and have done with it that way?

Well, yes, but with a significant caveat. The problem is that once auto-setup has been run, on subsequent logons, the auto-launch (so not the first-time setup, but the automatic launch of Teams for a user who has already done the auto-setup step, which we will refer to as auto-launch from hereon in) is also driven by the HKLM Registry value. So if you delete the HKLM Registry value so that users trigger the auto-setup step themselves, then the users will have to remember to manually launch Teams every time they log in. This isn’t ideal – apps like Teams are good to get auto-launched, because otherwise users may miss important conversations and messages. And don’t forget – there is no option in the user interface to auto-start Teams with the Machine-Wide Installer. The “user” version of Teams (the one that auto-updates, which we don’t want to use) has the option to auto-start in the GUI…

Bugger. The last point is even more irritating with the Machine-Wide Installer – as he said, there’s no way users can enable Teams to auto-start. Really, Teams is one of the worst-designed products for admin deployment and management!

On a side note I learnt from James’s blog post that in his experience too the GPO and those desktop-config.json settings did zilch! So at least I wasn’t being stupid.

Continuing with James’s post here’s what he felt one requires of Teams:

Now, our requirements appear to be thus – we want to disable the auto-setup, so that new users don’t all launch Teams at once and hammer the system resources, but once they have launched Teams for the first time, we want the program to auto-launch so they don’t have to run it manually.

… as well as getting rid of the auto-setup but keeping the auto-launch, we also want to force Teams to actually honour the setting for openAsHidden from the GUI.

You should obviously go read his much better blog post than mine, but here’s what we need to do, more as a reference for myself.

Step 1: Delete the HKLM registry key as I mentioned above. Either make it a part of your installation (my PSADT script does this) or use Group Policy Preferences:

(The first entry is to stop Teams from falling back to the Citrix server if the user is connected from a non Teams-optimized Citrix endpoint.)
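
If you do it as part of a PSADT install instead, a one-liner along these lines works (a sketch – on my installs the value under the Run key is named TeamsMachineInstaller, but verify the name on your build):

# remove the machine-wide auto-setup entry so Teams doesn't launch for every user
Remove-RegistryKey -Key 'HKLM\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Run' -Name 'TeamsMachineInstaller'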

Step 2: When Teams opens let it do so hidden. This is actually tweaked by the desktop-config.json file(!!) so a PowerShell startup script is in order.

Here’s the one I am going to use, but there’s loads more if you Google for it. Need to push this out as a user logon script.
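
The gist of such a script is below (a minimal sketch, assuming the setting lives under appPreferenceSettings as in the scripts linked above; Teams must not be running when the file is rewritten):

$configPath = "$env:APPDATA\Microsoft\Teams\desktop-config.json"
if (Test-Path -Path $configPath) {
    # flip the openAsHidden preference and write the file back
    $config = Get-Content -Path $configPath -Raw | ConvertFrom-Json
    $config.appPreferenceSettings.openAsHidden = $true
    $config | ConvertTo-Json -Depth 10 | Set-Content -Path $configPath
}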

Step 3: We want Teams to launch if a user has already launched it once. This is important – by deleting the HKLM key above we are ensuring it doesn’t just launch for everyone, but once a user has launched it manually we want it to always start (and be hidden – that’s set by the desktop-config.json file above, but Teams ignores that so we need to cater for that too – aaargh!).

Here’s what James does for that: if a user has launched Teams then the desktop-config.json file would be present. So he checks for that file and if it exists adds an HKCU\Software\Microsoft\Windows\CurrentVersion\Run entry to launch Teams. This points to the following:

"%ProgramFiles(x86)%\Microsoft\Teams\current\Teams.exe" --process-start-args "--system-initiated"

This is the same as what the HKLM entry used to point to, but with an additional switch which is required to make it honour the desktop-config.json file.

[ Update (the next day): This doesn’t work! Teams seems to still ignore desktop-config.json. 🤷🏼‍♂️I hate this product! As of now I’ve decided to just disable auto-launch and users can launch Teams manually. No one launches Outlook or Word automatically anyways, so I am going to deal with Teams similarly. So step 2 does not work, and I’ve decided not to do step 3 below.]

Next, he goes ahead and deletes the same HKCU\Software\Microsoft\Windows\CurrentVersion\Run entry if this desktop-config.json file is not present. This is just a continuation of the two HKLM entries I deleted in step 1, and I think he’s being thorough by also taking care of the HKCU case (Citrix too points to this, so it’s a good thing he’s doing it). If the desktop-config.json file is not present it means the user hasn’t run Teams yet, so don’t bother launching it.

XML version in case it helps copy-paste:

<?xml version="1.0"?>
<Registry clsid="{9CD4B2F4-923D-47f5-A062-E897DD1DAD50}" name="Teams" status="Teams" image="6" userContext="1" bypassErrors="1" changed="2021-01-26 17:24:02" uid="{36B7BAAE-DE63-4CD7-A45B-3F6FE494323E}">
  <Properties action="R" displayDecimal="0" default="0" hive="HKEY_CURRENT_USER" key="Software\Microsoft\Windows\CurrentVersion\Run" name="Teams" type="REG_SZ" value="&quot;%ProgramFiles(x86)%\Microsoft\Teams\current\Teams.exe&quot; --process-start-args &quot;--system-initiated&quot;"/>
  <Filters>
    <FilterFile bool="AND" not="0" path="%APPDATA%\Microsoft\Teams\desktop-config.json" type="EXISTS" folder="0"/>
  </Filters>
</Registry>

<?xml version="1.0"?>
<Registry clsid="{9CD4B2F4-923D-47f5-A062-E897DD1DAD50}" name="Teams" status="Teams" image="8" userContext="1" bypassErrors="1" changed="2021-01-26 17:20:59" uid="{FC683B87-E628-4BDB-B819-14D48AB81E5B}">
  <Properties action="D" displayDecimal="0" default="0" hive="HKEY_CURRENT_USER" key="Software\Microsoft\Windows\CurrentVersion\Run" name="Teams" type="REG_SZ" value=""/>
  <Filters>
    <FilterFile bool="AND" not="1" path="%APPDATA%\Microsoft\Teams\desktop-config.json" type="EXISTS" folder="0"/>
  </Filters>
</Registry>

So that’s where I am at today. Will update this post with more info if there’s any.

New-ADUser – A referral was returned from the server

This stupid error message stumped me for a bit yesterday.

Microsoft.ActiveDirectory.Management.ADReferralException: A referral was returned from the server at Microsoft.ActiveDirectory.Management.ADWebServiceStoreAccess.CheckAndThrowReferralException(ADResponse response) at Microsoft.ActiveDirectory.Management.ADWebServiceStoreAccess.Microsoft.ActiveDirectory.Management.IADSyncOperations.Add(ADSessionHandle handle, ADAddRequest request) at Microsoft.ActiveDirectory.Management.ADActiveObject.Create() at Microsoft.ActiveDirectory.Management.Commands.ADNewCmdletBase`3.ADNewCmdletBaseProcessCSRoutine() at Microsoft.ActiveDirectory.Management.CmdletSubroutinePipeline.Invoke() at Microsoft.ActiveDirectory.Management.Commands.ADCmdletBase`1.ProcessRecord()

It was generated by a new account creation Flow I maintain, and the error was from the New-ADUser cmdlet. There’s a bunch of posts on the Internet on this for the Set-ADUser cmdlet, but none for New-ADUser.

Upon a whim I ran the nltest /dsgetdc:<domain> command on the machine where New-ADUser was being run from and noticed the result was an RODC. So I did nltest /sc_reset:<domain> which gave me a regular DC. After that New-ADUser started working fine as expected. I guess the referral it was talking about was from the RODC to a regular DC and something about that didn’t gel well with New-ADUser.
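
For reference, the two commands (contoso.com standing in for your domain):

# ask the DC locator which DC it finds for the domain
nltest /dsgetdc:contoso.com

# reset the machine's secure channel, hopefully to a writable DC
nltest /sc_reset:contoso.com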

It’s been years since I ran any of the nltest commands! Am pleased I actually remembered it and thought to run the command. Past few years have been all Microsoft 365 and Power Platform, I’ve forgotten stuff from my younger days. :)

Azure AD connect sync via Remote PowerShell

I wanted to initiate a remote sync of Azure AD Connect via Remote PowerShell. The cmdlet is simple – Start-ADSyncSyncCycle -PolicyType Delta – but by default you can’t remote PowerShell into a server unless you are an admin, and I didn’t want to open up admin access to a service account. Moreover I wanted to limit what the service account can do.

The solution for this is simple, and something I found via Google.

Step 1 – Create your service account

Step 2 – Create a session configuration file on your Azure AD Connect server.

For this, open an admin PowerShell window and type the following:

New-PSSessionConfigurationFile `
  -ModulesToImport "C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync" `
  -VisibleCmdlets ('Start-ADSyncSyncCycle') `
  -LanguageMode 'NoLanguage' `
  -SessionType 'RestrictedRemoteServer' `
  -Path 'c:\PSSessionConfigurationFile\limited-aad-sync.pssc'

This includes the AAD Sync module, and limits the visible cmdlets to a single one. The file is stored in the path given.
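
You can sanity-check the generated file before registering it; this returns True if the .pssc is valid:

Test-PSSessionConfigurationFile -Path 'c:\PSSessionConfigurationFile\limited-aad-sync.pssc'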

Step 3 – Register this session

In the same PowerShell window do:

Register-PSSessionConfiguration `
  -Name 'Limited AAD Sync' -ShowSecurityDescriptorUI `
  -Path 'c:\PSSessionConfigurationFile\limited-aad-sync.pssc'

This opens up a dialog box, wherein you can search and select the service account previously created. Give this Full Control rights. This is what allows the service account to connect using this session configuration. You could select a group too, but I prefer usernames so no one else can make changes unless they are on the Azure AD connect server.

And that’s it really!

From a client side if I were to now try and connect it would fail with an access denied message:

New-PSSession -ComputerName $server -Credential $creds

That’s because the service account isn’t a local admin. Try with the session configuration created above instead:

New-PSSession -ComputerName $server -Credential $creds -ConfigurationName "Limited AAD Sync"

This works! If I store the session in a variable (the same New-PSSession call as above), I can then run the sync cmdlet:
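
$session = New-PSSession -ComputerName $server -Credential $creds -ConfigurationName "Limited AAD Sync"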

Invoke-Command -Session $session -ScriptBlock { Start-ADSyncSyncCycle -PolicyType delta }

Try any other cmdlet, and it will error out.

The above cmdlet also needs the service account to be in the “ADSyncOperators” group on the Azure AD Connect server. Else it will succeed in connecting but give the following error: Start-ADSyncSyncCycle: Retrieving the COM class factory for remote component with CLSID {835BEE60-8731-4159-8BFF-941301D76D05} from machine XXXX failed due to the following error: 80070005 XXXX.
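
Adding the service account to that local group is a one-liner on the Azure AD Connect server (the account name is a placeholder):

Add-LocalGroupMember -Group 'ADSyncOperators' -Member 'CONTOSO\svc-aadsync'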

That’s it! Easy peasy. Thanks to this Petri article for pointing me the right way.
