If you follow my blog or read some of my content on mailbox migrations, you'll know I spend a good amount of time working in the M&A area. As such, I thought it would be helpful to document some of the trickier scenarios and provide walkthroughs so you get an overall idea of what you'll be dealing with when doing these migrations. In the course of doing all this, I stumbled upon a security flaw within Microsoft's Exchange Online product. I thought it would be interesting to document my experience, how I went about reporting it to Microsoft, and the outcome.
When I was testing Microsoft Exchange tenant-to-tenant migrations, I discovered something that wasn't quite right and then realized I had found a security problem, in my case a "Security Feature Bypass". In my earlier blog, https://robsteuer.wordpress.com/2021/04/29/m365-exchange-online-cross-tenant-mailbox-migrations/, I used the New-MoveRequest cmdlet because I was moving individual mailboxes. These move requests kicked off without any issues. However, for a real production migration, you would likely use the New-MigrationBatch cmdlet to migrate a bunch of users in batch jobs. This is where I discovered the security problem!
When you set up the tenant-to-tenant migration, part of the process is running New-OrganizationRelationship, and one of its parameters is "MailboxMovePublishedScopes". This parameter takes the name of a mail-enabled security group containing the list of users the source tenant admins allow to be migrated. If a user is not in this group, their mailbox cannot be migrated. Well, that's how it was supposed to work...
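To make that concrete, the scope is set on the source tenant's organization relationship. Here's a minimal sketch; the group name and placeholder values are mine, and in practice the Microsoft-provided setup script creates this for you along with the OAuth application wiring:

New-OrganizationRelationship -Name "T2T-Migration" -DomainNames <target tenant ID> `
    -Enabled:$true -MailboxMoveEnabled:$true -MailboxMoveCapability RemoteOutbound `
    -MailboxMovePublishedScopes "MigrationScope"   # mail-enabled security group of users allowed to move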
If you use the New-MigrationBatch cmdlet and the users to be migrated weren’t listed in the security group by the source tenant admins, the target tenant admin would get an error stating that this move wasn’t allowed. HOWEVER, if you instead ran it using the New-MoveRequest cmdlet, the mailbox move would take place! It seems the New-MoveRequest cmdlet did not abide by the “SourceMailboxMovePublishedScopes” parameter! Therefore the target tenant administrator had full control to move whatever mailboxes they wanted and were able to bypass this security feature simply by using the New-MoveRequest cmdlet.
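To show the bypass concretely, here's roughly what the two paths looked like from the target tenant. The parameter sets are trimmed and the names are illustrative; my earlier cross-tenant post has the full commands:

# Batch path: honored the published scopes, so out-of-scope users were rejected
New-MigrationBatch -Name T2TBatch -SourceEndpoint $endpoint -CSVData $csv `
    -TargetDeliveryDomain targetdomainga.onmicrosoft.com     # rejected for users outside the scope

# Individual path: the same out-of-scope mailbox moved anyway (this was the bug, since fixed)
New-MoveRequest -Identity peter.gibbons@targetdomain.ga `
    -RemoteTenant "sourcedomainml.onmicrosoft.com" `
    -TargetDeliveryDomain targetdomainga.onmicrosoft.com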
The first step is creating an account with MSRC and then submitting your findings. I did this on October 12, 2021. It creates a case where you can track the progress and looks like this:
The next day, something or someone picked up my ticket and marked my case for ReviewRepro. This is where they check to see if what I’m reporting is reproducible and they see the issue as well. On October 25th, the status was changed from ReviewRepro to Develop. Once this happened, I was 100% sure I had found and was the first to report this security issue to Microsoft. This was pretty exciting and you start wondering about what is happening behind the scenes, etc. However, you need to just calm down because the next leg of the journey takes the longest. It took from October 25th until January 19th for Microsoft to finish development and apply the fix to Exchange Online.
While I was excited about the prospect of a bounty, I didn’t expect too much as it wasn’t that big an issue in my opinion. Unfortunately, no bounty was awarded, not even $50 to take my wife out to dinner after making her listen incessantly about this security issue I found. I even asked if there was a t-shirt or stickers but nope, nothing. This was disappointing. All I have to show is the credit mentioned here:
So will I be continuing down the path as a security researcher? I’m not sure yet. I’ve started to look into it more because I wasn’t really aware of this niche area of IT or the fact that depending on what you find you can get paid some serious money. It sure makes an interesting side gig and provides some interesting stories to tell. It could even be a fun retirement activity! 🙂
This blog post isn't exactly about fixing something, well maybe it is, but in a different kind of way. It's a blog post I wish I could have read a few years ago. It's focused on fixing one's understanding of cryptocurrencies and the monetary system. I won't be addressing the technical aspects of blockchain or cryptography. This isn't to make you pro-crypto or anti-crypto, but rather to allow yourself the proper time to really understand the concepts, the technology, and what it all really means. Also, none of what I'm posting should be construed as financial advice; I'm only offering this as educational content. Some of the links below are referral links for which I might receive a small commission, which helps me produce content and offset the time I invest to put these posts together. I don't use affiliate links for any services I don't personally believe in or wouldn't recommend to a friend. That said, let's continue on…
From my own experience and those of others I've talked to or read about, I've found that most people don't find out about crypto and hop on board 100%. It takes a few "touches": at first you hear about it and laugh, how ridiculous, imaginary Internet money, LOL! Time goes by, it pops into your life again, maybe you start to wonder about it some more and spend a little time on it, but then something more important takes precedence and it's forgotten about again. At some future point, though, you realize you need to dig into this cryptocurrency thing a bit deeper and see what it's all about. Over the past year, I took a deep dive into learning more about Bitcoin and crypto in general. I'm hoping that those coming here to learn about Exchange, IT, or fixing things stumble upon this post and learn something. During my deep dive, I learned a lot and even discovered some things that others (who have been in this space longer than myself) were not aware of yet. So this should serve as a good starter for those new to the topic, and maybe offer some hidden gems that even seasoned veterans weren't aware of.
I thought a lot about how to write this post and I decided against explaining the same things you can find on lots of other sites. I’m not going to write a lot of background or detailed information on the topic or try to push my ideas or opinions. Instead, I will write a short bit, but then try to direct your learning and include lots of links for you to learn more on your own. In this way I can offer you the most benefit by guiding you to the sources I found the most influential to help expedite your learning. So, if you decide you want to take the “red pill”, keep reading, if not that’s OK too, maybe you’ll find your way back here later.
Brief History
Bitcoin was invented in 2008 by the mysterious Satoshi Nakamoto, whose true identity no one really knows. It might have been a single person or a group of individuals. You should read the Bitcoin whitepaper (if you don't understand all of it, that's OK): https://bitcoin.org/bitcoin.pdf. Bitcoin was not the first attempt at a digital cash system, but it was unique in that it was the first decentralized one to solve the double-spend problem. Funny enough, there's an email connection here, which is close to my heart: HashCash, an early system proposed to prevent spam, is used in Bitcoin as the proof of work for mining. Bitcoin utilizes public/private key cryptography, where the public key is used to receive funds and is OK to share with others, while the private key should only be known to you and is used to spend your funds. The key here is that if someone has access to your private key, they control your money.
Bitcoin has been around for well over 10 years now and mostly everyone has heard of it. People have also heard stories about those who got rich buying Bitcoin early on, but most people I talk to don't have a really good understanding of what Bitcoin or cryptocurrency is or why it's useful. Prior to my deep dive, I was one of those people. As a U.S. citizen, I didn't think much of Bitcoin when I first heard about it back in 2011 or 2012. It seemed shady and associated with criminal activity, and why would I even need this when I have cash and credit cards? So I didn't give it much thought. The very volatile price of Bitcoin over the years also led to feelings that it was a pump-and-dump type of investment scam. However, as I started to learn more, my feelings and understanding on the topic evolved.
Why Bitcoin?
There are many reasons that Bitcoin and some (not all) cryptocurrencies make sense. Outside of the U.S. and other developed countries, there are many in the world who are underbanked and don't have access to bank accounts. Bitcoin allows anyone to store their money in a decentralized way without any central intermediary such as a banking institution. There are pros and cons to this approach; the pro is that you are the bank, the con is that you are the bank. 🙂 Let me explain: as you are the bank, you don't need a third party to authorize you and give you permission to store your money with them, or to send or receive money from someone else. However, as you are the bank, you are also 100% responsible for your money, so you are "bank security" as well.
Because crypto is decentralized, it is not tied to any government and is therefore not influenced by politics. It is a worldwide currency and has the properties of money (scarcity, divisibility, store of value, etc.). It has played an important role for citizens of countries that were experiencing issues with their own money, like Cyprus and Afghanistan. It is also a form of money that can't be censored, unlike systems where an intermediary can define an acceptable use policy and block or withhold funds as they see fit. It also makes it very easy to send simple payments or great amounts of wealth across the globe in minutes.
The U.S. and most developed countries operate under a central banking system that can print money which is not backed by any hard asset like gold or silver. This type of money is called fiat. It wasn't always this way; under a gold standard, a country couldn't just print more money. This limited the country's spending and kept a sound monetary system in place. The U.S. left the gold standard in 1971. Basically, the U.S. government (and other governments whose currencies aren't backed by a hard asset like gold) can print as much money as they want, which isn't a good thing as it decreases the purchasing power of the money. Here's a graph showing the currency in circulation, aka "U.S. money printing", and you can see how it's looking more and more like an exponential curve. If a country has a lot of debt, it can simply print more money to pay that debt. Pretty convenient, but there are consequences.
What does this have to do with Bitcoin and Cryptocurrencies?
Unlike fiat money, Bitcoin has a hard limit of 21 million bitcoins. Of these 21 million, a few million have been lost forever, which decreases the supply. Further, you can't produce more than the 21 million bitcoins. Well, technically anything is possible, you could modify the code, but it won't happen. I would have to go into a lot of detail here about getting consensus between all the nodes running Bitcoin, so instead I'm going to refer you to the first of many resources: take the time to order and read The Bitcoin Standard. This book doesn't start explaining Bitcoin until around chapter 8, but the lead-up is necessary so you can really grasp why Bitcoin is something to consider as an alternative asset in your portfolio. The short of it, though, is that since there is a hard limit of 21 million bitcoins, unlike a government, you can't simply "print more money". Therefore, as demand to hold Bitcoin increases and it is a scarce resource, the price only has one way to go: UP. Governments cannot manipulate the currency, and since the supply is fixed, the purchasing power of Bitcoin can only be expected to strengthen over time. The price volatility of Bitcoin right now might concern some, but this is an emerging asset class and volatility should moderate over time.
How can I learn more?
As I mentioned earlier, it wasn’t my intention to produce a lengthy 100 page blog post on cryptocurrencies and money in general. I would also be doing you a disservice by trying to condense all the knowledge I gained over the course of a year into a blog post. Instead, I’m providing the curated set of material below that I found useful, so you can learn more:
Books:
The Bitcoin Standard (A must read, however I don’t agree 100% with everything in the book)
The Global Addiction to QE (Does not discuss Bitcoin, but great to really understand how the current money system works, what is Quantitative Easing, and why it’s a problem you need to learn about).
Videos:
Andreas Antonopoulos – Introduction to Bitcoin. I discovered this video a week after I originally posted this blog article, and it's a must view. If you do nothing else, watch this…
MIT has done an excellent job with their MIT OpenCourseWare and made Blockchain and Money available to the public for free. This isn't a 30-minute YouTube video, but a full class with over 23 hours of content on the topic. Hint: if you're not awake yet, MIT put this course together in 2018; maybe it's time to find out more, it's still early….
Plot11 does a great job putting together Crypto infotainment videos like this one: The Rise and Rise of Bitcoin. They have a lot of other great ones too, so check them out!
Meetup.com – There are lots of in person or online live events. Search your area for “Bitcoin” or “Crypto”. In general, large cities have some type of presence of individuals meeting to discuss crypto, DeFi, blockchain, etc.
I’m informed now, so how do I get some crypto?
Basically, you can either purchase or mine crypto. If you are a beginner, I would recommend creating an account at an exchange such as Coinbase where you can purchase Bitcoin. Many exchanges, Coinbase included, offer deals where if you purchase $100 of crypto, they will give you $10 in free bitcoin. Another reason to open a Coinbase account is that they offer free "Learn and Earn" opportunities, where you watch a video and earn free crypto. Over the past year, I've earned over $80 worth this way (valued at the time I earned it), and that value has increased as the price of crypto has increased.
Exchanges where you can purchase Bitcoin and other crypto
Gemini – Create an account, buy or sell $100 of crypto and get $10 free bitcoin
Coinbase – Create an account, buy or sell $100 of crypto and get $10 free bitcoin
Voyager – Create an account, buy or sell $100 of crypto and get $25 free bitcoin
PRO-TIP: After your initial purchases via an Exchange to get started, you’ll want to use their advanced trading platform so you can save on fees. For example, while it’s easy to purchase crypto on Coinbase or Gemini, you’ll pay a bit more in the transaction fee to do so. If you use their advanced or pro platform, the transaction fee will be much less, but the experience is more complicated.
As far as mining Bitcoin, trying to do so will be tough, so I’m not going to get into it too much because the equipment to do it successfully is expensive (if you can even find it) and the cost of electricity might not make it worth it. Maybe I’ll go over this in a separate blog post, but as this is an intro, I’m not going to address it further for now.
Taxes
Yes, the government wants their share of your profits. ¯\_(ツ)_/¯ This is where I maybe start to offer the information that will REALLY help you. Unlike your stock brokerage account, where the broker can see what you buy and sell and prepares a nice tax form for you at the end of the year, crypto doesn't work like that. For a few reasons, although mainly because you can and should move your crypto off of the exchange and hold it yourself. The saying is, "Not your keys, not your crypto". Because you can move your crypto without selling it, the exchange you purchased it from really can't know whether or when you sold it, so you need to keep track yourself. This can become an impossible task unless you were to simply purchase 1 Bitcoin, hold it for 10 years, and sell it. It is best to look into crypto tax tracking software early on to avoid this nightmare. There are a few options, but Accointing does a very nice job. It also provides a "single pane of glass", like HP OpenView, Zabbix, or some other type of network monitoring software, where you can see all your crypto assets on one screen.
While the government wants their share of your profits, those in the U.S. (and maybe other countries) can also write off their losses, and this is where I'm going to show you the light. In the U.S. there is a rule known as the "wash sale" rule: if you purchased, for example, 10 shares of Apple stock at $100 and the next day it dropped to $20 per share, you could sell the stock and write off the loss, but only if you didn't re-purchase the shares you sold within 30 days. The U.S. federal tax authority treats crypto as an asset, not a security, so here we have an advantage. In the same scenario, if you purchased 10 Bitcoins and the next day they dropped by 50%, you could sell them and then purchase them back shortly after. You can deduct the loss on your taxes and still be in the same position. These losses might help offset capital gains in other areas like stocks, for the time being. I think this is a great strategy if you really feel crypto will do well over the long term. This is referred to as tax loss harvesting and can be quite beneficial.
CeFi / DeFi
While Bitcoin has been around for over 10 years now, and can be seen as a currency or asset like gold, what is fairly new to the scene is CeFi (Centralized Finance) and DeFi (Decentralized Finance). You can stake certain crypto assets to earn interest in-kind, or you can lend crypto assets and generate yield. You can also take loans against your crypto assets as well. There are a few platforms to check out, Celsius Network being one of them. You can generate up to 10% interest on some coins, so it’s worth investigating. Instead of just HODL‘ing your crypto, you can put it to work for you to earn passive income.
Celsius Network – Create an account, transfer $400 of crypto and get $40 free bitcoin
BlockFi – Create an account, transfer $100 of crypto and get $10 free bitcoin
Nexo – Create an account, transfer $100 of crypto, earn interest and get $25 free bitcoin
Security
One thing I want to address is that Bitcoin is only pseudonymous, not anonymous. With a Bitcoin address, anyone can use a block explorer to look inside a wallet and see how much Bitcoin is there and who it was sent to or received from. At the top of my site, you'll see I posted a Bitcoin QR code which resolves to the public address bc1qlejmxk0kav0zad7p06n3txdk67uqwhtgk5n578 that someone could donate to if they wished. It is easy to see how much Bitcoin is at that address with a block explorer, as I mentioned, and you can click here to see an example. Right now there is nothing in there, but maybe some kind soul will put a few satoshis (fractions of a Bitcoin) in there so others can see how this works. What this means is that if someone can tie your Bitcoin public address to you, they can see how much Bitcoin you have and where you have sent or received it. Because of this, I am partial to Monero for the kind of transactions you would make more commonly between parties. I'll write about Monero more in another blog post, but consider Monero the HTTPS version of Bitcoin (which operates more like HTTP). With Monero, others can't see how much you have, where you sent it to, or where you received it from.
Another area of security to address is keeping your crypto assets safe. You'll want to understand the difference between keeping your crypto on an exchange like Coinbase vs keeping it in your own wallet (software or hardware). There are pros and cons to both approaches, so you need to really understand the differences so you don't lose your money. Exodus offers a software wallet that runs on your PC and mobile device, and they offer a good breakdown of the different types of wallets.
Common FAQ’s
Do I need to purchase a whole Bitcoin? No, this is a common misconception and keeps people from buying because most people wouldn’t spend $50,000 USD or whatever the current price is for a single Bitcoin. You can purchase fractions of a Bitcoin, so if you only want $10 USD worth, that is possible. This allows you to dollar cost average into crypto which is an investing strategy.
Why do people think that Bitcoin will continue to increase in value? Money is considered valuable when it is scarce in quantity and there is a demand for it. As an example, cigarettes in prison can be considered money, as they're something of value and in scarce supply (I'm guessing here, I've never done any time). That is why the fiat money that governments produce does not retain its purchasing power: it's easy for them to print more at a negligible cost. This is related to what's known as the "stock to flow". You can learn more about it in the book I mentioned earlier, The Bitcoin Standard. The short answer for why Bitcoin will continue to increase in value is that the amount of new Bitcoin being produced (the flow) relative to the amount that exists (the stock) is very small and will further decrease over time. Another example would be gold, which is hard to mine, and what is mined each year is a small fraction of the existing stock. That is why gold retains its purchasing power.
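To put rough, order-of-magnitude numbers on that (these are ballpark figures, not precise data):

stock-to-flow = existing stock / new production per year
Bitcoin (after the 2020 halving): roughly 18.8 million BTC in existence / roughly 0.33 million BTC mined per year ≈ 57
Gold: roughly 200,000 tonnes above ground / roughly 3,000 tonnes mined per year ≈ 65

The higher the ratio, the "harder" the money, and Bitcoin's ratio roughly doubles at every halving while gold's stays about the same.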
This sounds like a lot of work. I just want to buy a coin with a dog on it and get rich. Sorry, but I can’t help you.
Let me close by mentioning that it's a good idea to keep track of your finances in general, not just crypto. If you're not already familiar with the tool Personal Capital, I would recommend you start using it. It helps you get a very clear picture of your finances and is a great tool. You can access it via a browser or via an app on your mobile device.
What are your thoughts on Bitcoin and crypto in general? What about the existing money system? Has this post made you question or re-affirm your view point? I’d love to hear your feedback!
In this blog post I’ll be detailing my walkthrough of the process of sharing free/busy information between Google Workspace (G-Suite) and Microsoft Exchange Online.
Google runs the show in this setup and the process is laid out here: Allow Calendar users to see Exchange availability data – Google Workspace Admin Help. I needed to test the procedure as it would be needed for a project I’m working on regarding a new acquisition. The setup I used is the same one I used for the Google Workspace to Exchange Online mailbox migration, so you can get the details regarding the setup from that blog post, but a quick summary is as follows:
I setup one trial M365 tenant and one trial G-Suite Workspace like this:
gwtoexo.ml – Exchange Online domain
acquired.ml – Google Workspace domain
After I created the trial and setup the domains properly in G-Suite and M365, I created some test users on the Google side as well as in the Microsoft tenant. Office Space characters make the best test users in my opinion….
The Google Calendar Interop supports both basic authentication for Exchange as well as OAuth 2.0 for Exchange Online. While OAuth 2.0 won’t work for on-prem Exchange, basic authentication does work for Exchange Online and might be the better approach depending on your viewpoint.
In the first part of the Google documentation, "Allow Calendar users to see Exchange availability data", they go over how the Exchange users should be set up and how to configure the Google Calendar Interop. In Step 1, they detail that your Exchange users will need an associated mailbox, as will the G-Suite users on the other side. My free/busy testing was done right after I migrated some users from G-Suite to Exchange Online, so make sure you turn Google Calendar off for any migrated users, as they will still have a mailbox in G-Suite after a G-Suite to Exchange Online migration if you are using the Microsoft migration method. Also, because I did a migration earlier, a user domain alias was already set up for the users in G-Suite.
If you only need basic availability times returned without any event details, you can skip the step for “Turn on full event detail lookups”. If you want limited event details, you’ll need to run the following per Google’s documentation:
You can expose limited event details for an individual mailbox using the following command:
For my testing, I ran this across all the mailboxes:
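Since the commands were in screenshots, here is my best reconstruction of what I ran; defer to Google's documentation for the exact cmdlet and access rights, but I believe it sets the Default calendar permission to LimitedDetails:

# Limited event details for a single mailbox
Set-MailboxFolderPermission -Identity "bob.smith@gwtoexo.ml:\Calendar" -User Default -AccessRights LimitedDetails

# Applied across all mailboxes, as I did for my testing
Get-Mailbox -ResultSize Unlimited | ForEach-Object {
    Set-MailboxFolderPermission -Identity "$($_.PrimarySmtpAddress):\Calendar" -User Default -AccessRights LimitedDetails
}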
Step 2 of the Google documentation, regarding Exchange Internet connectivity, can be skipped. If you are using Exchange on-prem and want to lock down the IP ranges that can connect, Google provides those details.
Step 3 is where you need to create the Exchange role account. This account is a low privileged account with an associated user mailbox. Think of this account as a proxy for performing the free/busy lookups. By that I mean, within an Exchange organization, Bob can do a free/busy query for Alice, but Joe from Google can’t currently. However, in the basic authentication scenario, we will be providing the Exchange role account credentials to Google. In that way, when Joe at Google wants to get free/busy information for Alice, Google “logs in” as the role account and gets the free/busy information for Alice, just like Bob could within the Exchange organization. I tested both basic authentication as well as OAuth 2.0 so we’ll go over both. First though, basic authentication.
For production use, you want to ensure this account's password does not expire after 30 days or whatever your policy dictates. Also, if MFA is turned on, you'll need an exception for this account if using basic authentication.
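One way to handle the password expiration piece, assuming the AzureAD PowerShell module and an example role account name:

Connect-AzureAD
# Prevent the role account's password from expiring
Set-AzureADUser -ObjectId "exchange-interop-role@gwtoexo.ml" -PasswordPolicies DisablePasswordExpiration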
Now we can jump to Step 5 as we’re not doing the OAuth 2.0 part right now. Sign in to your Google Admin console: https://admin.google.com/ as your admin account that has Super Admin permissions.
Click on Apps -> Google Workspace -> Calendar and then click Calendar Interop management.
Check the box here to Enable Interoperability for Calendar and select Exchange Web Services (EWS) under Type as I show below:
You'll enter the Exchange role account email address and then, as we are testing Basic Authentication first, ensure that option is selected, click Enter Password, and enter the password for this account. Once you've done that, it should look like this:
You then have the option to select Show Event Details to get more info or an option regarding Room booking, but for just basic information, leave these unchecked and click SAVE.
In the second part “Allow Exchange users to see Calendar availability data”, we create the Google role account and generate some PowerShell code which we’ll run in the Exchange environment to add the availability space. Yes, having Google generate PowerShell code that runs in your Microsoft environment is unsettling…. I understand, but we must continue on…
There are five steps listed here; we can skip steps 1 and 2 because I've already set up the Google Workspace users and there are no concerns about Internet connectivity. One note on step 1, though: on the Exchange side it's OK to use a mail-enabled user vs a mail contact. It will work the same way.
In Step 3, we create the Google Role account. This is the account that Exchange will use to act as a proxy to retrieve free/busy times for the users on the Google side. This should be a regular low-privileged account with Calendar enabled.
Open the following link and click Execute for the Exchange authentication credential generation tool: https://calendar.google.com/Exchange/tools/ while logged in as your super administrator account.
Check the box to acknowledge “I understand that regenerating these credentials will revoke any old credentials for the Google Role Account” and click Generate new credentials.
At this point, you will be prompted to sign in as the Google role account to generate this new credential. Worth mentioning again, sign in as the Google Role account!
After hitting Next and signing in, you’ll need to click Accept on the Terms of Service.
You’ll then click Download and save this credentials.dat file in a safe place (you only get this one shot to download them.)
The credentials.dat file is just a simple file that looks like this:
Now we move onto Step 4 which will ultimately add the availability address space to Exchange.
While still logged in as your super administrator account, go back to this page: Google Calendar: Calendar Interop Tools and click Execute on the Exchange Server configuration.
You’ll end up on this screen where you will click Choose File and navigate to and upload the credentials.dat file that you saved earlier. Also, make sure Exchange 2013 or newer, including Office 365 is selected.
Enter the email address of the Exchange role account that Google will use to contact the Exchange environment and the Google availability address space and then click Show Exchange setup. Note: In my testing I used a domain alias gsuite.acquired.ml as the availability address space as I was doing some other different testing as well, but if your G-Suite and Exchange don’t have overlapping email addresses, just use the main domain of the Google Workspace, in this case it would have been just acquired.ml.
The Google tool now shows the following configuration information as well as the PowerShell code snippet to run in the Exchange environment.
You'll want to copy and paste this code into an Exchange PowerShell window logged in with the correct Exchange permissions to add the availability address space.
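Use the exact snippet the Google tool generates for you; it boils down to adding an availability address space for the Google routing domain, roughly along these lines (the account name and access method here are illustrative, not a substitute for the generated code):

# Credentials: the Google role account email plus the contents of credentials.dat as the password
$pass = ConvertTo-SecureString (Get-Content .\credentials.dat -Raw) -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential("google-interop-role@acquired.ml", $pass)

Add-AvailabilityAddressSpace -ForestName gsuite.acquired.ml -AccessMethod OrgWideFB -Credentials $cred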
Now we move on to Step 5, where we wait or restart our Exchange server. For fun, create a ticket with Microsoft asking them to restart Exchange Online worldwide… just kidding, give it a little time and it will be fine.
We can now move to the last step which is “Verifying the user availability setup.”
The first time I ran the availability lookup tester tool, I got this error:
This is basically saying that Google couldn't authenticate to the Exchange environment. Why? Because with new tenants, Microsoft enables "Security Defaults", which blocks basic authentication. So that I could complete my testing, I disabled this, but more information from Microsoft is located here: Disable Basic authentication in Exchange Online | Microsoft Docs. Make sure you understand the consequences of doing this.
Now when I tested, I got a successful result.
The real testing though is from the end users perspective, so let’s take a look.
Here, Bob Smith who has a mailbox in Exchange makes a calendar entry in his Outlook:
Bob Slydell who has a mailbox in Google Workspace can see that Bob Smith is busy from 9:30 to 11PM. It only shows Busy since we elected not to share event details during the setup.
Bob Smith creates some more meetings in his Outlook calendar:
Bob Slydell in G-Suite again has no problem seeing that Bob Smith is busy:
Now let’s look in the other direction. Bob Slydell has some calendar entries in his Google calendar:
When Bob Smith, who is on Exchange, uses the Scheduling Assistant to query free/busy times for Bob Slydell, he sees the following (Note: it works for bob.slydell@gsuite.acquired.ml and not bob.slydell@acquired.ml. This is because of how I set the Google availability address space. With non-overlapping domains, use the main domain for the Google availability address space, i.e. acquired.ml):
Here you can see Bob Slydell's Google calendar, which shows the event details; e.g. at 9am, where Bob Smith only sees Busy, the actual entry is "Don't call me".
I'll be adding to this blog post at a later time to show how this can also be set up using OAuth 2.0 vs basic authentication, and explaining the pros and cons of each approach.
In this blog post I’ll be detailing my walkthrough of the process of migrating mailboxes from the Google Workspace (G-Suite) to Microsoft’s Exchange Online. Microsoft used to perform this process themselves via the Fasttrack process where they used the Quest tool on the backend, but now they’ve made it available for IT staff to do themselves. Let’s get started!
The process for the migration is laid out here: Perform a Google Workspace migration | Microsoft Docs. I needed to test the procedure as it would be a new way of migrating Gmail users to Exchange vs the Fasttrack/Quest solution used in the past. I recently did a blog post on Exchange Online to Exchange Online mailbox migration, and while I did run into some issues with this Gmail migration, they were far fewer than with the Exchange Online to Exchange Online migration. The problems that I did run into I've documented, and I hope that what I learned will help others if they encounter these issues. I also discovered that there are migration options that aren't detailed in the official Microsoft documentation; for example, Microsoft states that the objects on the target side must be MEUs (Mail-Enabled Users), but this isn't necessarily true.
For testing, I registered one free domain name using the Freenom service for the Microsoft 365 EXO side: gwtoexo.ml
For the Google side, Google is very persnickety and doesn’t like newly created domain names because they think you’ll spam everyone, so instead of a brand new one which I had tried first, I had to use an old domain name I had registered years ago, but for this article we’ll call it acquired.ml. More on this in a short bit…
I then set up one trial M365 tenant and one trial G-Suite. Microsoft is easy because you don't even need a custom domain name to set up the Microsoft side. Google is a different animal, and this is where I ran into my first problem. While you can register for a free 14-day G-Suite trial with a credit card using this link: https://workspace.google.com/google/workspace, Google, as I mentioned, doesn't like it if you use a "new" domain name, since you might be a spammer abusing their service. Really annoying, so I used another domain I had registered some years ago and was able to get things going with that. Again, for this article we'll call it acquired.ml even though that wasn't really it in my testing.
So, my test setup was now like this:
gwtoexo.ml – Exchange Online domain
acquired.ml – Google Workspace domain
After I created the trials in G-Suite and M365 and set up the domains properly in both, I created some test users on the Google side as well as in the Microsoft tenant. Office Space characters make the best test users in my opinion….
To have some mail to migrate, I populated the user’s mailbox via the Mailbait service.
I let this bake for a day to get a decent amount of mail in the mailboxes for the good folks at Acquired. I will say that Google has decent spam filtering, only a small amount of mails got through.
One of the big things I was trying to figure out was whether the target side really needed users to be setup as MEU’s (Mail Enabled Users) as specified in Microsoft’s documentation:
Spoiler Alert: This isn’t exactly true based on my testing! More details later…
Another thing I didn't fully understand was this part of the Microsoft documentation: when exactly does this switch from MEU to mailbox happen, and what happens if the migration fails? Can you resume it if it's now already a mailbox? All this will be answered shortly!
Also, the Microsoft documentation talks about setting up routing addresses for use during the migration, so I setup acquiredml.gwtoexo.ml for users on the M365 side and gsuite.acquired.ml for users on the Google side. Basically, all the magic happens with mail forwarding to make it a seamless process for the co-existence and migration period. The documentation further talks about it here:
After you are setup (likely you already have a Google Workspace that you’re migrating from as well as a M365 Exchange Online environment), it’s time to get started with the migration setup, so let’s get this going.
The first thing you need to do is on the Google side, which is to create a Google Service account. From the Google Cloud Platform while logged in as a Google Workspace Admin, go to APIs & Services and Manage Resources.
You'll see the following; click CREATE PROJECT:
In the New Project area, enter a project name and then click Create.
You’ll then see the following:
Now, go to IAM & Admin and select Service Accounts and select the project we created earlier:
Click on Create Service Account
Enter the Service Account Name and click Create and Continue.
For the step 2 “Grant this service account access to project” click Continue:
Finally, click Done for the step 3, “Grant users access to this service account”
You’ll now see something like this:
Hover over the Email field and click on it like here:
You'll now see this screen; make note of the Unique ID, as this is important. This is the client ID used for the OAuth scope, which we'll need when we work on the EXO part.
A little further down the screen you’ll see a checkbox for “Enable Google Workspace Domain-wide Delegation”, you’ll need to check this box and enter the product name for the consent screen and click Save:
Next we need to create a new key pair. While on the IAM & Admin page, Service Accounts, click the Keys tab and then click Add Key and Create new key.
For the Key type, select JSON and then click Create.
You’ll then download the key file with a funny name, mine was thematic-garage-321214-cd3c5432b7f5.json
We now need to enable API usage for our project. You’ll want to go to the Developer page for API library here: https://console.developers.google.com/apis/library and sign in as the Google Service account you just created. You’ll want to search for and enable each of the following APIs:
Gmail API
Google Calendar API
People API
Google Contacts API (this one is deprecated, but I enabled it)
If you want to verify and see what APIs are enabled, you can do so as documented here:
In my setup, it showed the following:
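If you prefer the command line, the gcloud CLI can list what's enabled for the project (the project ID here is taken from my key file name; run this from any shell with gcloud installed):

gcloud services list --enabled --project thematic-garage-321214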
The next major step is to grant access to the service account for the Google tenant. Make sure you go to the right control page, the Google Workspace Admin page: https://admin.google.com/AdminHome and sign in as the Google Workspace admin account and follow the instructions:
Click Security, then API Controls, and then Manage Domain Wide Delegation.
This is one area where Microsoft's documentation was out of date. To help you, the setting for Domains is under Account. So instead, we are going to click Account, then Domains, then Manage domains, and then Add a domain.
After you click Add a domain, you’ll enter the routing domain, in my case I used: acquiredml.gwtoexo.ml and used a TXT record to verify the domain ownership.
After you add the TXT record to your DNS, you can verify the domain.
Now it’s important to create an MX record in your DNS provider for the domain you just verified to point to your M365 Exchange Online, in my case:
acquiredml.gwtoexo.ml MX preference = 0, mail exchanger = acquiredml-gwtoexo-ml.mail.protection.outlook.com
Further and just as important, add this domain in your M365 tenant so it shows as an accepted domain:
We now need to create a subdomain in Google for mail routing to our Google Workspace.
In my case I created the domain alias as a subdomain of my primary Google Workspace domain, so I entered: gsuite.acquired.ml and then click Continue and Verify Domain Ownership.
Because this is a subdomain of the already verified primary domain, it should be automatically verified.
Setup the MX records for this domain alias in your external DNS provider like you did for the routing domain you created in EXO so that mails to this domain alias will be routed to your Google Workspace.
You’ll also notice that your users in G-Suite have this domain alias automatically applied to them as an alternate email address:
Now would be a good time to verify that mail flow works between the routing domains on each end as this is a critical part of the co-existence period while you are migrating users from G-Suite to EXO.
I did run into an issue here. The problem was that in Exchange Online, the mail forwarding wasn’t working. It seems new tenants have restrictions in place as I was getting this error:
Microsoft put this into place to prevent BEC (business email compromise) and data exfiltration, however for my testing, I needed this working, so I modified the settings to “On – Forwarding is enabled” in https://security.microsoft.com/antispam as shown here:
You may get this error though; if so, just run Enable-OrganizationCustomization from a PowerShell window connected to EXO:
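For reference, the PowerShell equivalent of both of those changes is roughly the following; the policy name "Default" is an assumption, so check yours with Get-HostedOutboundSpamFilterPolicy:

Enable-OrganizationCustomization    # one-time, and only if you're prompted for it
Set-HostedOutboundSpamFilterPolicy -Identity Default -AutoForwardingMode On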
Also, I created a new remote domain for the gsuite.acquired.ml routing domain to Google Workspace with AutoForwardEnabled set to True:
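The screenshot didn't survive, but the remote domain piece looks roughly like this (the remote domain name is just a label I chose):

New-RemoteDomain -Name "GSuite routing domain" -DomainName gsuite.acquired.ml
Set-RemoteDomain -Identity "GSuite routing domain" -AutoForwardEnabled $true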
The official Microsoft documentation discusses provisioning users in M365/O365 at this point; however, we have already done that, and you probably have in your environment as well. If not, you'll need to create MEUs (Mail-Enabled Users), although as you'll read later, technically it will also work if they already have a mailbox in M365 (with some caveats). The important part of this section I've highlighted below:
What this means is that in my example, the mail-enabled user Michael Bolton whose primary SMTP address is michael.bolton@gwtoexo.ml should have a ExternalEmailAddress of michael.bolton@gsuite.acquired.ml and a proxyaddress of michael.bolton@acquiredml.gwtoexo.ml.
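In PowerShell terms, checking or setting that on the MEU looks something like this (a sketch using my lab's addresses):

Set-MailUser -Identity michael.bolton `
    -ExternalEmailAddress michael.bolton@gsuite.acquired.ml `
    -EmailAddresses @{add="smtp:michael.bolton@acquiredml.gwtoexo.ml"}

Get-MailUser -Identity michael.bolton | Format-List ExternalEmailAddress,EmailAddresses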
Now the fun part starts, as everything is set up at this point and we see where the Microsoft documentation breaks down, leaving some head scratching which I hope to address! 🙂
Start a Google Workspace migration batch with the New Exchange Admin Center
If you follow the Microsoft documentation as written, it states the following:
In step 6, they say to select the migration endpoint from the drop-down menu. This just isn't going to happen, folks; there is nothing to select.
Further, if you try to Create a new migration endpoint, it ends in despair:
It blows up spectacularly like this:
There are also instructions on how to try this with the Classic EAC, but I’m going to spell out what you should already know from the Exchange 2007 days…. just use PowerShell… Follow the instructions further down on Starting a Google Workspace migration with Exchange Online PowerShell and you’ll be set.
Here, I show testing the migration server availability and creating the migration endpoint, which you could then select using the GUI… seems they got ahead of themselves putting the documentation together.
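The commands in my screenshot follow the pattern from the Microsoft doc, roughly as below; the JSON key file is the one downloaded from Google earlier, and the email address can be any Google Workspace user:

$bytes = [System.IO.File]::ReadAllBytes("C:\temp\thematic-garage-321214-cd3c5432b7f5.json")
Test-MigrationServerAvailability -Gmail -ServiceAccountKeyFileData $bytes -EmailAddress bob.slydell@acquired.ml
New-MigrationEndpoint -Gmail -ServiceAccountKeyFileData $bytes -EmailAddress bob.slydell@acquired.ml -Name GmailEndpoint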
Now we can see and select the migration endpoint:
But, if you follow the GUI further down the road, we run into issues:
You can’t select or enter the target delivery domain…seriously just give up on the GUI and use PowerShell….
The format for the users to be migrated should be in a CSV format like this:
Put the users in a CSV file as I did here named “userstomigrate.csv” and run the New-MigrationBatch cmdlet as shown. You’ll see it initializes in the Stopped state.
We can now start the migration batch and check its progress:
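Reconstructing roughly what that looked like (the CSV has a single EmailAddress column per the Microsoft doc; the batch name is mine, and double-check the correct TargetDeliveryDomain for your setup, as I used the EXO routing domain):

$csvData = [System.IO.File]::ReadAllBytes("C:\temp\userstomigrate.csv")

New-MigrationBatch -Name GSuiteBatch -SourceEndpoint GmailEndpoint -CSVData $csvData `
    -TargetDeliveryDomain acquiredml.gwtoexo.ml     # initializes in the Stopped state

Start-MigrationBatch -Identity GSuiteBatch
Get-MigrationBatch -Identity GSuiteBatch | Format-List Status,TotalCount,SyncedCount
Get-MigrationUser -BatchId GSuiteBatch | Get-MigrationUserStatistics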
When you start the migration batch, it's at the very beginning that the magic happens. At this point, the mail-enabled user in EXO becomes a mailbox, and the forwarding on this mailbox gets set to the user's G-Suite routing address (gsuite.acquired.ml):
Here’s a view from the PowerShell side of another user where the migration just started and the forwarding got set on the EXO mailbox:
Shortly after, the mail, calendar entries, contacts, and rules will start to populate the Exchange Online mailbox. Only these items will be migrated; you won't see Tasks migrate or anything else. If you're logged in to the mailbox, you can see this happening in real time. The reason we want this forwarding in place is that the user is still using the G-Suite mailbox as their primary; we are just "staging" the email migration at this point, so any mails going to the EXO mailbox will get sent over to G-Suite where the user can work on them.
As a side note, in the past I would use move requests with the -SuspendWhenReadyToComplete flag; however, that isn't available for the New-MigrationBatch cmdlet. A workaround is to use the -CompleteAfter parameter and set it to some date far in the future. Then you can complete the batch when you are ready with the Complete-MigrationBatch cmdlet.
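A sketch of that workaround (dates and names are just examples):

# Sync now, but don't auto-complete until some far-future date
New-MigrationBatch -Name GSuiteBatch -SourceEndpoint GmailEndpoint -CSVData $csvData `
    -TargetDeliveryDomain acquiredml.gwtoexo.ml -CompleteAfter (Get-Date).AddYears(1)

# When the users are ready to cut over
Complete-MigrationBatch -Identity GSuiteBatch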
At some point, the migration batch will show the mailbox in a fully synced status, so you're ready to complete the batch whenever the user is ready to migrate. Michael Bolton is ready to migrate, so let's pull the trigger.
Checking on the progress, the migration batch is doing the final delta sync and completing the migration.
When the migration batch is in the Completing stage, the forwarding in EXO gets turned off:
Further, the forwarding gets turned on in the Google Workspace environment (Note: it gets set to forward AND keep a copy in Google Workspace!)
Now that the migration has completed, mails going to the user's Google Workspace mailbox are automatically forwarded to the routing address for the Exchange Online mailbox and delivered there.
Rules migrate nicely for the users too. I later did a migration for Samir, and you can see the rules (filters in Gmail lingo) got migrated to Exchange without any issues:
Now to address some of the interesting questions and discoveries I made.
Question: What happens if you start the migration batch and then it gets killed or errors out and you need to restart it? Answer: This was one of the first questions I had, because the Microsoft documentation states that the user must originally be a mail-enabled user on the Microsoft side, BUT the MEU gets converted to a MAILBOX at the BEGINNING of the migration… so what happens if there's a problem along the way? I can happily report that restarting the migration batch works just fine; it doesn't complain that the user already has a mailbox, and there is no duplicated mail or anything like that.
Question: What happens if you try to migrate a user who already has a mailbox on the Microsoft side? Answer: The New-MigrationBatch command is accepted, as is Start-MigrationBatch, and mails, etc. are migrated from Google to Microsoft. However, the forwarding is never set on the EXO mailbox once the migration starts, so you'd have to set this manually if you already have users with mailboxes in EXO. When the migration is completed for the user, the forwarding IS set in G-Suite.
Question: What happens if you manually set the forwarding for a user who already has a mailbox in EXO? Answer: In this case, it's similar to the previous question, but here you are manually setting the forwarding. When the migration batch completes, it sets the forwarding in G-Suite AND removes any forwarding in EXO (even though it didn't set it itself).
Question: What happens with Gmail filters and Gmail labels? Answer: Outlook rules are created from Gmail filters. Outlook folders are created from Gmail labels. Note though that mails with Gmail labels are duplicated by putting one copy in an Outlook folder with the label name and the original in the Inbox.
Question: What happens to mails in the Gmail Trash? Answer: It migrates the items in the Gmail Trash to the Outlook Deleted Items.
Question: What if I’m doing co-existence with free/busy queries between G-Suite and EXO? Answer: I’m working on that blog post next, but it’s a good idea to disable the calendar for the migrated G-Suite user.
Question: Anything else I should do after I migrate the user? Answer: Yes, don’t forget to license the user in Exchange Online, otherwise they’ll turn into a pumpkin after 30 days.
Question: What to do after I’ve migrated ALL the users? Answer: After you’ve migrated ALL the users, you’ll want to add the primary domain for the acquired company to M365 and verify it so that it is also an accepted domain in Exchange Online. Once that’s in place, you can switch the MX records for the G-Suite domain to point to Microsoft 365 and decommission the Google Workspace environment and relax.
I’ve spent most of my working life migrating mailboxes, cc:Mail DB6 to DB8, cc:Mail to Exchange, Notes to Exchange, Gmail to Exchange, on-prem Exchange to on-prem Exchange and most recently from one Exchange Online tenant to another Exchange Online tenant. This capability was not natively available from Microsoft originally. I was able to be part of the early adopters doing this under NDA back in 2019 where Microsoft would do some magic on their backend to allow this capability.
Now however, the capability is available via public preview and is possible for all customers to perform without any special involvement from Microsoft.
The process for Microsoft cross tenant mailbox migration is laid out here: https://docs.microsoft.com/en-us/microsoft-365/enterprise/cross-tenant-mailbox-migration?view=o365-worldwide and as stated, is currently in preview. I needed to test the procedure as it had changed from when I did it back in 2019, so I thought this would make for an interesting blog post. I will also state that this isn’t an easy process. I ran into lots of problems and documented them. My hope is that what I learned will help others if they encounter these issues. If it all worked as designed there really wouldn’t be a need for this blog post, right? Let’s get started!
As a test, I registered two free domain names using the Freenom service:
sourcedomain.ml
targetdomain.ga
I then setup two trial M365 tenants:
sourcedomainml.onmicrosoft.com
targetdomainga.onmicrosoft.com
After setting up the domains properly in M365 and getting the custom domain names registered, I created some test users in the sourcedomain.ml tenant.
To have some mail to migrate, I populated the user’s mailbox via the Mailbait service.
I let this bake for a few hours to get a decent amount of mail in the mailboxes for the good folks at Initech. Now that everything has been prepped, we can start the show!
Note: You might see the source tenant referred to as the resource tenant in Microsoft documentation.
If an existing Azure Resource Group is not provided, a new one is created (SCRIPT).
If an existing Key Vault is not provided, a new one is created (SCRIPT).
A new Access Policy is created for the Office 365 Exchange Online Mailbox Migration application (SCRIPT).
A new certificate is created (or existing one, if specified) to hold the secret to the Migration application (SCRIPT).
A new Azure AD application is created (SCRIPT).
The certificate/secret is uploaded to the migration application (SCRIPT).
Mailbox migration permissions are assigned to the application (SCRIPT).
The deployment script pauses until target admin consents to their own application (SCRIPT).
The target tenant admin consents to the permissions given to the application (MANUAL).
An organization relationship is created to the target tenant (SCRIPT).
A migration endpoint is created to pull mailboxes to the target tenant (SCRIPT).
Prepare the source tenant:
The source tenant admin accepts consent to Mailbox Migration application invitation from the Target tenant (MANUAL).
The source tenant admin creates a mail-enabled security group in their tenant to contain the list of mailboxes allowed to be moved by the migration application (MANUAL).
An organization relationship is created to the target tenant specifying the mailbox migration application should be used for OAuth verification to accept the move request (SCRIPT).
Let's go ahead and open up a PowerShell window and connect to Exchange Online in the target tenant.
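If you haven't connected before, that part is just the following (the admin UPN is an example):

Install-Module ExchangeOnlineManagement      # one-time, if you don't already have the module
Connect-ExchangeOnline -UserPrincipalName admin@targetdomainga.onmicrosoft.com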
We know the ResourceTenantDomain, ResourceTenantAdminEmail, TargetTenantDomain, and ResourceTenantId; but what about the rest?
Let's start with the SubscriptionId. Open the Azure Portal, and under All services, click Subscriptions; you'll see it's empty. (Note: If you're using an existing tenant with an associated subscription, you should already have a subscription ID. I did not, since this was a brand new install.) Verifying with the Azure Cloud Shell, we can see the following:
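From Cloud Shell, the same check is a one-liner (Cloud Shell ships the Az module, so this should just work):

Get-AzSubscription    # returns nothing on a brand-new tenant with no subscription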
Click Add on this page:
And it will take you here:
Free Trial looks good, so let’s select that. You’ll have to fill out the information to create a free Azure account.
You should now have a default subscription like this:
And have $200 in free Azure credits to play with:
Before we run the command to prepare the Target Tenant, let’s do a few things first.
Open the Azure cloud shell and create the necessary storage within our Azure subscription 1 that was created.
Let’s accept the defaults and click Create Storage
After about a minute (it’s the cloud…), you’ll be presented with the Azure Cloud Shell.
Back in the PowerShell window that we connected to Exchange Online and Azure with, we will now run our command:
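For reference, the invocation looks roughly like this; the parameter names follow the Microsoft doc, the values are from my lab, and the script version you download may differ slightly:

.\SetupCrossTenantRelationshipForTargetTenant.ps1 `
    -ResourceTenantDomain sourcedomainml.onmicrosoft.com `
    -ResourceTenantAdminEmail admin@sourcedomainml.onmicrosoft.com `
    -ResourceTenantId <source tenant GUID> `
    -TargetTenantDomain targetdomainga.onmicrosoft.com `
    -SubscriptionId <subscription GUID> `
    -ResourceGroup "Cross-TenantMoves" `
    -KeyVaultName "Cross-TenantMovesVault"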
When I ran the command, I received the following error text:
Import-AzureModules : Missing modules –
Powershell module: [AzureRM.Insights] minimum version [5.0.0] is required for running this script. Please install this module using: Install-Module AzureRM.Insights -AllowClobber
Powershell module: [AzureRM.KeyVault] minimum version [5.0.0] is required for running this script. Please install this module using: Install-Module AzureRM.KeyVault -AllowClobber
Powershell module: [AzureRM.Profile] minimum version [5.0.0] is required for running this script. Please install this module using: Install-Module AzureRM.Profile -AllowClobber
Powershell module: [AzureRM.Resources] minimum version [6.0.0] is required for running this script. Please install this module using: Install-Module AzureRM.Resources -AllowClobber
Microsoft stated in their documentation to ensure you have installed the Azure AD PowerShell module prior to running the scripts, but here we see it also needs the AzureRM modules, so let’s install those.
I ran the following commands:
Install-Module AzureRM.Insights -AllowClobber
Install-Module AzureRM.KeyVault -AllowClobber
Install-Module AzureRM.Profile -AllowClobber
Install-Module AzureRM.Resources -AllowClobber
After this, I ran the command below again and this time, a window appears, asking you to login with the target tenant credentials.
Login-AzureRmAccount : Method ‘get_SerializationSettings’ in type ‘Microsoft.Azure.Management.Internal.Resources.ResourceManagementClient’ from assembly ‘Microsoft.Azure.Commands.ResourceManager.Common, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35’ does not have an implementation.
Resource Group Cross-TenantMoves successfully created
New-AzureRmKeyVault : The vault name ‘Cross-TenantMovesVault’ is already in use. Vault names are globaly unique so it is possible that the name is already taken. If you are sure that the vault name was not taken then it is possible that a vault with the same name was recently deleted but not purged after being placed in a recoverable state. If the vault is in a recoverable state then the vault will need to be purged before reusing the name. For more information on soft delete and purging a vault follow this link https://go.microsoft.com/fwlink/?linkid=2147740. At C:\temp\crosstenant\SetupCrossTenantRelationshipForTargetTenant.ps1:315 char:15 + … $kv = New-AzureRmKeyVault -Name $kvName -Location $kvLocation – … + CategoryInfo : CloseError: (:) [New-AzureRmKeyVault], CloudException + FullyQualifiedErrorId : Microsoft.Azure.Commands.KeyVault.NewAzureKeyVault
I tried closing PowerShell, opening a new PowerShell window, connecting to Exchange Online, and then running the script again; however, it resulted in the same error, complaining that the Key Vault name already exists when it didn't (or so I thought, anyway!). It seems Microsoft, in their infinite wisdom, decided that Key Vault names are not scoped per tenant but are globally unique. This means that, just like the NAME.onmicrosoft.com when you set up your tenant, if the name already exists anywhere in Azure, forget it, you can't use it. How annoying! To be fair, now that I look back, the error does say that vault names are globally unique, but in the heat of troubleshooting I missed this on the first pass.
The other thing was that I had to create the KeyVaultStorageGroup manually.
Note that the storage account name must be globally unique too; I finally secured ttttstorageaccount after trying tstorageaccount, ttstorageaccount, and tttstorageaccount. This is going to get fun as the cloud gets more popular!
The uniqueness issue was more apparent to me when I was trying to manually create the storage in the Azure portal and it kept telling me the name was taken and then trying different names, until it didn’t complain anymore.
After I finally got the KeyVault setup, I ran the setup script again and received this error:
To fix this issue, I did the following:
It will take some time saying “Registering” but finally will show:
Running the script again, I got this error; however, it's one I expected. I highlight it because it's not mentioned in the Microsoft documentation that the resources should all be in the same region:
To move things along and get past this error, since the storage account was set up for geo-replication, I did a failover to the West US region to get it into the same region as the Key Vault:
Running the script yet again, it moves further along and now gets to the point where it asks for admin consent for the target tenant. Copy this URL into a browser window and log in. You'll then grant consent:
Click Accept, then switch back to the PowerShell window and press Enter to continue setup.
It will send consent to the source (resource tenant) but then runs into another issue while running the command Get-OrganizationRelationship as shown below:
Note however, it continued along and did create the endpoint:
So now, how do we fix what is hopefully the last error: “The command you tried to run isn’t currently allowed in your organization. To run this command, you first need to run the command: Enable-OrganizationCustomization.”
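The fix is exactly what the error says: connect to Exchange Online PowerShell for the tenant you're working in and run Enable-OrganizationCustomization. It only needs to be run once per tenant and can take a little while to take effect. The admin UPN below is a placeholder:
Connect-ExchangeOnline -UserPrincipalName admin@targetdomain.ga
Enable-OrganizationCustomization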
If you’ve been following along this far, you’re probably dehydrated, go get a coffee, tea, water, maybe a beer at this point…
Now it runs completely without issue:
To be sure it’s complete, we can run again and see the “good” error:
Let’s run the whole script for the 99th time now and see what happens!
Copy the admin consent URL into a browser and press Enter; you'll see the client ID is different now:
After clicking "Accept" again, go back to the PowerShell window and press Enter.
Finally, the first clean run of the script on the target side!
The target tenant setup is now completed!
Setting up SOURCE (RESOURCE) Tenant.
After the target tenant admin consents via the URL, it sends an email to the source (resource) tenant admin to accept the invitation to allow the target tenant to PULL the mailboxes. From my testing, since I ran this twice, I have two such emails. I’ll do the accept on the latest email:
Clicking Accept invitation opens a browser window with the following:
Click Accept.
We now need to set up a mail-enabled security group to control which mailboxes are allowed to be pulled from the source to the target. You can do this in the Exchange Admin Center or via PowerShell, as shown below.
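If you go the PowerShell route, a minimal sketch while connected to the source tenant (the group name and addresses are placeholders):
New-DistributionGroup -Type Security -Name "T2T-MigrationScope" -PrimarySmtpAddress T2T-MigrationScope@sourcedomain.ml
# Later, when you're ready, add the users who are allowed to be pulled
Add-DistributionGroupMember -Identity "T2T-MigrationScope" -Member peter.gibbons@sourcedomain.ml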
It's not necessary to populate the group right now; it just needs to exist. It might take a few minutes to create…
We can now proceed to open a PowerShell window and run the script to set up the source tenant.
Connect-ExchangeOnline as an Exchange Administrator or Global Administrator to the source tenant.
Here again, be sure to stay hydrated! 😊
The cloud might fight you:
Try again and keep running until you get the “good” error that it’s already enabled.
Finally!
The source (resource) tenant setup is now completed!
Now, it’s time to verify the setup.
From the targetdomain.ga side:
From the sourcedomain.ml side:
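The manual checks boil down to looking at the organization relationship on each side and the migration endpoint on the target. A quick sketch of what to look at (the exact names and values will differ in your tenants):
# Target tenant: MailboxMoveEnabled should be True and the capability should reflect the inbound side
Get-OrganizationRelationship | Format-List Name, DomainNames, MailboxMoveEnabled, MailboxMoveCapability
Get-MigrationEndpoint | Format-List Identity, EndpointType
# Source (resource) tenant: the capability should reflect the outbound side and the published scopes should list your security group
Get-OrganizationRelationship | Format-List Name, DomainNames, MailboxMoveEnabled, MailboxMoveCapability, MailboxMovePublishedScopes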
We can also run the provided VerifySetup.ps1 script; however, for me this didn't work and I encountered lots of issues.
Running the verify script on the source results in this (OK on the Organization Relationship, but it complains the appid isn’t registered in the source tenant, which I think is to be expected as it’s registered in the target tenant!):
Running the verify script on the target results in this (it just refuses to accept the ApplicationKeyVaultUrl):
Even though the provided Microsoft verification script didn't help me, my manual checks give me enough confidence that everything is set up as it should be, so let's continue on to the Exchange mailbox migration.
First, we need to prepare the user objects to be moved.
Here are the users in the source tenant that we have to migrate:
To flag these users for migration, I will run the following to set CustomAttribute1 to the value T2T:
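For reference, a minimal sketch of that tagging step, run against the source tenant (the user names are placeholders; adjust to your environment):
# Placeholder identities; swap in your own users
$users = "peter.gibbons", "samir.nagheenanajar", "michael.bolton"
foreach ($u in $users) {
    Set-Mailbox -Identity $u -CustomAttribute1 "T2T"
}
# Sanity check: everything flagged for the move
Get-Mailbox -Filter "CustomAttribute1 -eq 'T2T'" | Format-Table Name, ExchangeGuid, PrimarySmtpAddress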
In the script example from Microsoft's documentation, the red highlighted part should be on a separate line!!!
After you fix that and get through that hurdle, you should have a file that looks like this if you just run the export-users part:
We now need to create the MEUs (mail-enabled users) on the target side. Again, be cautious with the script example! There are issues with the spacing and with some of the parameters it tries to set depending on your scenario; for example, "-organization" isn't a parameter, $x500 should be on a separate line, etc.! The bigger issue is that the provided script is written for on-premises Exchange, and you will run into trouble getting it to run against a cloud-only environment. For now, we'll just create an MEU manually in the target tenant.
MEU Before stamping with necessary attributes:
MEU After stamping with necessary attributes (CustomAttribute1, ExchangeGUID, x500 address):
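For reference, a sketch of doing the stamping via PowerShell in the target tenant. The GUID and x500 values below are placeholders; pull the real ones from the source mailbox first (e.g. Get-Mailbox peter.gibbons | Format-List ExchangeGuid, LegacyExchangeDN):
# Placeholder values; replace with the ExchangeGuid and LegacyExchangeDN from the source mailbox
$exchangeGuid = "00000000-0000-0000-0000-000000000000"
$x500 = "x500:/o=ExchangeLabs/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=placeholder"
Set-MailUser -Identity peter.gibbons -ExchangeGuid $exchangeGuid -CustomAttribute1 "T2T" -EmailAddresses @{add=$x500}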
We also need to set the targetdeliverydomain (in this case I made it sourcedomainml.targetdomain.ga) and ensure the other attributes are set correctly:
The targetdeliverydomain is nothing more than a routing domain that will be used to transport mail from the source tenant to the target tenant once the user is migrated. This way, users in the source tenant who haven't migrated yet will still be able to email the migrated users. A subdomain of the target domain works very well here.
You can set up this subdomain in FreeNom DNS very easily with an A record and an MX record like this:
Then confirm the MX record in M365.
Once that's done and the targetdeliverydomain is registered properly in M365 and set in the user's proxyAddresses, we can kick off the move (it might take a little while for the domain to show as an accepted domain in EXO). Note that we reference the identity of the mailbox to be migrated by its ExchangeGUID in the source tenant.
Peter Gibbons is finally leaving Initech!
As it’s a small mailbox, it moves quite quickly:
For demonstration purposes, you can see at this stage, Peter is a MEU still in the target domain. Let’s complete the mailbox move and you’ll see that the move is completed and Peter now shows as a mailbox user in the target domain!
Looking over at the source domain we can see that Peter Gibbons is no longer a mailbox user, but instead he got converted to a MEU.
More importantly, you can see that the MEU for Peter also got assigned via the ExternalEmailAddress the targetdeliverydomain so that mails sent to Peter in the source tenant will route to the target tenant. In this way, users can still communicate no matter what stage of the migration is going on currently.
Peter’s mailbox is currently unlicensed:
We will go ahead and fix that:
It’s actually a good idea to apply the license earlier, but not too early…Don’t apply the license until the target MEU has the ExchangeGUID and x500 attributes applied.
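If you'd rather license via PowerShell than the portal, a hedged sketch using the MSOnline module (the UPN, usage location, and SKU name are assumptions; check Get-MsolAccountSku for your tenant's actual SKUs):
Connect-MsolService
Get-MsolAccountSku    # find the right AccountSkuId, e.g. tenantname:ENTERPRISEPACK
# UsageLocation must be set before a license can be assigned
Set-MsolUser -UserPrincipalName peter.gibbons@targetdomain.ga -UsageLocation "US"
Set-MsolUserLicense -UserPrincipalName peter.gibbons@targetdomain.ga -AddLicenses "tenantname:ENTERPRISEPACK"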
How nice, Peter got a mail from Samir who hasn’t been migrated yet (showing that the targetdeliverydomain routing is working as expected).
Samir got a nice reply back from Peter:
All that's left is to set up a migration batch job to migrate the rest of the good folks at Initech.
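A sketch of what that batch could look like, run from the target tenant. The endpoint name and CSV path are assumptions; the CSV just needs an EmailAddress column listing the MEUs to migrate, and the endpoint name comes from Get-MigrationEndpoint:
# CSV with a single EmailAddress column (placeholder path)
$csvData = [System.IO.File]::ReadAllBytes("C:\temp\T2T-users.csv")
New-MigrationBatch -Name "Initech-T2T" -SourceEndpoint "CrossTenantEndpoint" -CSVData $csvData -TargetDeliveryDomain "sourcedomainml.targetdomain.ga" -AutoStart
# Keep an eye on progress and complete when ready
Get-MigrationBatch -Identity "Initech-T2T" | Format-List Status, TotalCount, SyncedCount
Complete-MigrationBatch -Identity "Initech-T2T"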
Now if only I could find my red stapler….
As a special note, I'd like to thank my daughter Emily for proofreading this post. I think she's even more confused about what exactly I do for a living, but she did help me correct some grammatical errors and speling errors… that one's just killing you, isn't it? 😉
I ran into an issue with my home lab environment the other week that I thought was interesting after figuring out the problem, so let me explain…
I have an HP DL380 G7 server running VMware ESXi that has (8) 600GB 10K SAS drives in a RAID 10 configuration, which provides 2.18TB of usable space. With the RAID 10 setup in my DL380 G7, there are 8 physical disks, each disk having a mirrored partner. As long as you don't lose both disks of a RAID 1 mirrored pair you are fine, so technically I could lose up to 4 disks, assuming none of them were mirror partners of any of the others.
This is what a RAID 10 layout looks like:
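In text form, a simplified sketch of the idea; the actual pairing the P410i chooses may differ:
Mirrored pair 1:  Disk 1  <->  Disk 2
Mirrored pair 2:  Disk 3  <->  Disk 4
Mirrored pair 3:  Disk 5  <->  Disk 6
Mirrored pair 4:  Disk 7  <->  Disk 8
Data is then striped (RAID 0) across the four mirrored pairs.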
As an example, I could lose 4 disks like this and still be OK:
Or I could lose 4 disks like this and still be OK:
But it would be GAME OVER if I lost 2 disks like this:
In the scenario that happened to me, I lost this one disk:
Now, I’ve had a disk fail before and being the good IT guy, I of course had a spare 600GB 10K SAS drive just sitting on the shelf for just this occasion. I first tried removing and re-inserting the failed drive, but while it tried briefly to rebuild, it quickly failed again. I removed the failed drive and then inserted the new spare I had and it went amber/orange/agh! on me. This isn’t how it’s supposed to happen!
I figured that the spare I had bought for just this purpose was a dud. These things happen, so I went online and paid more than I should have to get a new drive sent out to me via FedEx Priority Overnight. The drive that failed on me was a model # EG0600FBLSH and that's what I ordered, as shown here:
Now, while I was waiting on the replacement drive, I did contact the eBay seller I had purchased the original spare from about 11 months prior and told them about the situation. Without any hesitation they sent out a replacement drive to me but it would take a few more days to arrive.
The next day, the EXPENSIVE drive that was shipped FedEx Priority Overnight was delivered at 10am. Finally my prayers had been answered: my server was still up and I had my replacement drive. I opened the package, inserted the drive into the server, and the drive lights went amber/orange/areyoukiddingme?!
At this point, it was time to dig in and figure out what the heck was going on, because I couldn't have 2 bad replacement drives; what would be the odds of that happening? I was thinking maybe it was the P410i controller or the backplane connection, but all the other drives were fine. I needed to get more detailed info about why the drive was failing to rebuild. The first problem is that VMware ESXi doesn't show you any great info about the array controller and the disks, just basic info like this:
It’ll show you something is wrong, but no details. That’s where we need to go to the command line! At this point you’ll want to SSH to your ESXi box and for my system, go to the /opt/hp/hpssacli/bin directory and run: ./hpssacli ctrl slot=0 pd all show detail
This gave me the additional details that I needed, to help me eventually figure it out…
It showed that the failure reason was: "Hot plug replacement too small". Surprisingly, there's not much on the Internet about this error. It seems pretty self-explanatory, but I had put in a 600.0GB SAS drive. I found one link here: https://serverfault.com/questions/458804/proliant-server-will-not-accept-new-hard-disks-in-raid-10 that talks about an incredibly rare situation where someone was trying to replace a 73.5GB drive with a 72GB drive, but that wasn't my problem; my other drives were showing 600.0GB and this replacement drive was 600.0GB.
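A few other hpssacli queries that can help when narrowing down array problems (the slot number depends on your controller):
./hpssacli ctrl all show config
./hpssacli ctrl slot=0 ld all show status
./hpssacli ctrl slot=0 pd all show status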
I then decided to put the original failed drive back into the server and see what the array controller was reporting. At first it showed the rebuild:
And then it failed with an understandable error (Mark bad failed):
By this time, the eBay seller’s drive arrived via USPS and I opened the package and inserted the drive into the server. Instead of going amber/orange it went green and showed a status of Rebuilding.
After a short period, the drive was done rebuilding and showed a status of OK!
Now while I was happy, I had to understand why this happened. Was a third time really a charm? Yes, but there was a reason why….
After looking closer, I noticed that the first spare drive I tried was model EG0600FBDBU. The second spare drive I had ordered was supposed to be an EG0600FBLSH, but looking at the drive, it was actually an EG0600FBDBU. The drive that had failed originally was an EG0600FBLSH, and that's the one that did try to rebuild but failed because the disk was bad (understandable). The replacement drive that the eBay seller sent me, by luck, was an EG0600FBLSH, and that's the one that worked. It all made sense now, because the failed drive (2I:1:8) was in a RAID 1 mirror with (1I:1:4), which was also an EG0600FBLSH. All the other drives in my server are EG0600FBDBUs.
Clearly the EG0600FBDBU and the EG0600FBLSH are not the same, even though they are listed as such by the seller and other places refer to them as interchangeable:
The drives show the same reported size of 600GB, but the EG0600FBDBU must be ever so slightly smaller, even though it's not reported as such, going by the failure message of "Hot plug replacement too small". Maybe the drive would work under different circumstances like JBOD, but in this specific scenario the array controller wasn't happy unless it was the exact same model number.
I’ve not seen anyone else report something like this online, so thought I’d document it…
If you run into this problem, or this post helps you, I’d love to hear about it!
I ran into an issue the other day where the Microsoft Exchange Mailbox Replication Service would not start on one of the servers in a DAG. The service would time out when trying to start, as seen in the System event log, with no other really useful information to go on. The issue persisted after a reboot as well, so it wasn't a one-off that a restart would clear out. AV was disabled, and a few other ideas were tried, all to no avail.
I started looking at the files, thinking maybe there was some corruption, and sure enough that was the issue. The file highlighted in yellow was originally named MsExchangeMailboxReplication.exe.config and was only 1KB in size. Comparing it to another server in the DAG, the file should have been 22KB. I'm not sure what exactly happened to cause this, but after renaming the file with the appropriate .wtf extension and copying an intact MsExchangeMailboxReplication.exe.config file from one of the good servers in the DAG, the Microsoft Exchange Mailbox Replication Service was able to start and all was good again.
I ran into this issue while doing some rearranging in my environment. Basically, all my ExternalUrl values were showing the correct new value after updating the virtual directories, but when running the Microsoft Remote Connectivity Analyzer, it still showed an older external URL for <PublicFolderServer>server.domain.com</PublicFolderServer>.
The solution was to change the OutlookAnywhere ExternalHostname value. You can see my solution posted to the Microsoft TechNet forum as others could not resolve this issue previously. Hopefully this helps someone in the future.
For some time, I had been getting random account lockouts. All the usual suspects had been interrogated, scheduled tasks, drive mappings, credential manager, etc. As the problem started to annoy me more and more, I took a deep dive determined to find the root cause of the issue.
When I would get locked out, the AD domain controllers showed that it was actually an Exchange server generating the bad password attempts. The Exchange server in question would change; it wasn't any one in particular, and they all sat behind a load balancer.
The first step was to check these Exchange servers and make sure I didn't have any scheduled tasks or the like running on them. No such luck.
To be extra sure, I disabled Exchange ActiveSync and Skype for Business on my mobile phone for a few days and saw that the issue persisted. At this point I was sure that something else was causing the bad password attempts, maybe the phone could still be a contributing factor, but even with it disabled, it still occurred.
I started to notice specific behavior, though; for example, rebooting my PC would cause it to happen more frequently, probably as the system was starting up. Using the Microsoft Account Lockout tool (https://www.microsoft.com/en-us/download/details.aspx?id=15201) I could see the domain controllers that were receiving the bad password attempts; sometimes there were enough of them that my account got locked out.
Seeing the exact time of the lockouts on the domain controllers, I then started examining my system for events happening at the same moment. Bingo! I found a smoking gun; no bullet yet, but definitely smoke. At the same time the bad password got logged on the DC, I had a 4648 logon event on my PC showing an Audit Success, and the Target Server information listed the very Exchange server where the bad password or lockout occurred, just as the DC was showing!
Now I had another clue! The event shows the process ID: 0x15c4. Since this is in hexadecimal, I had to convert it to its decimal equivalent: 5572. The process ID (PID) is important here because there are many svchost.exe processes running on a Windows computer; SVCHOST.EXE acts as a container for many services of similar types. You can read more about it here: https://en.wikipedia.org/wiki/Svchost.exe
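If you want to do the conversion and the PID-to-service mapping straight from a PowerShell prompt, a quick sketch:
# Convert the hex PID from the 4648 event to decimal
[convert]::ToInt32("15c4", 16)    # 5572
# See which services are hosted in that particular svchost.exe instance
tasklist /svc /fi "PID eq 5572"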
Also, some talk about “Project Rome” and “Universal Glass”, pretty interesting stuff.
As an FYI, the characters after the underscore seem to change after a reboot, so everyone's system will show something different for those last few characters.
So, what to do next… I wanted to confirm that these services were the cause of the issue I had been seeing, so I put together a quick script to keep them stopped, since disabling them outright could lead to system issues according to some of those posts:
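A sketch of that kind of loop, for reference; the wildcard service names are assumptions based on the "Sync Host" (OneSyncSvc) and UniStack per-user services, and the suffix after the underscore will be different on your machine:
# Keep the per-user sync/connected-devices services stopped while testing
while ($true) {
    Get-Service -Name "OneSyncSvc_*", "CDPUserSvc_*" -ErrorAction SilentlyContinue |
        Where-Object { $_.Status -eq 'Running' } |
        Stop-Service -Force -ErrorAction SilentlyContinue
    Start-Sleep -Seconds 30
}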
This checks the state of the services every 30 seconds and stops them if they were running. Using this script I was able to see my system had no bad password attempts, except at times when one of the services tried to start up on its own. (The screenshot below shows where the last few characters changed to 84532)
Now it was time to see what was happening when these services started up to cause the bad password. I ran Wireshark to capture packets and then manually started the "Sync Host_14f82f" service. Sure enough, a bad password attempt occurred, confirmed via the Lockout tool and in my system's Security log with a 4648 event showing the Exchange server it hit. Looking at the Wireshark capture at the time of the lockout showed my laptop talking to the Exchange load balancer, and setting up a display filter in Wireshark allowed me to see the issue more clearly.
My system was trying to communicate using IMAP on port 143 to the Exchange load balancer. WHY? I wasn’t using any IMAP software, but rather Outlook connected to Exchange. Looking further I see reference to “Windows Mobile” and ultimately a TLS Fatal Handshake Failure. Clearly, my system was trying to communicate via IMAP and it was failing due to the bad password. What on my system could be using IMAP when these mysterious services were started?
Then it hit me… I had set up the Windows Mail app that comes with Windows 10 about a year and a half earlier to test IMAP while setting up a new F5 load balancer for our Exchange environment. I started up Windows Mail and sure enough, it presented me with the following:
“The search is over, you were with me all the while” (Maybe those who grew up in the ‘80’s get the reference). At least I figured it out and I was a survivor. OK, I’ll stop now! Haha 😊
In summary, if you or someone else complains about account lockouts or bad password attempts and it looks like an Exchange server is causing the lockout, check whether any of their devices has a forgotten email client configured with an old password.
But, in this case, the really interesting thing was that I was never actively running the Windows Mail client! These UniStackSvc services were kicking it off in the background, most likely checking for new mails, etc. from time to time, completely unknown to me.
Hopefully this helps someone experiencing a similar issue.
Normally I'd offer my answer, but Microsoft locked this post, so I'm unable to reply… maybe he'll find this message in a bottle on the Internet someday.
So, long story short: a Windows 7 machine with IE11, and IE works fine except when clicking on a SharePoint team site in Office 365, at which point IE crashes. I tried the regular steps of resetting IE to default settings, etc., but it would still do this. Other browsers worked fine. I disabled AV and tried a few other things, still the same issue. Something was wrong with IE, but it was hard to say exactly what.
I did find the solution though. I removed IE11 from the Windows machine using the following process which is documented here:
I used the command line version: FORFILES /P %WINDIR%\servicing\Packages /M Microsoft-Windows-InternetExplorer-*11.*.mum /c "cmd /c echo Uninstalling package @fname && start /w pkgmgr /up:@fname /norestart /quiet"
Now this doesn't make for a pretty scenario: the Windows 7 build goes back to IE8 and Windows Update ends up broken. After fixing the Windows Update components and then updating to IE10 and finally IE11, everything works as it should.
Quite a pain to fix this weird issue, but still satisfying. 🙂