
Anti-malware scanning is performed after connection filtering. A lot of malware comes from compromised home computers and other low-reputation IP ranges, and that traffic is already blocked by the connection filter. That saves a lot of processing resources, because malware filtering doesn’t need to be performed on every single message that arrives with an attachment. EOP uses multiple antivirus engines to check email attachments for known viruses and malware. The antivirus engines are continually updated throughout the day, and the malware scanning also handles file type restrictions, so you can block attachment types that are commonly used as carriers for malware, such as executable files and scripts, as well as any other file type you just don’t want to run the risk of accepting.

If malware is detected, the message is quarantined, but only so that an admin can review it and decide whether to release it. Obviously, you don’t want to release actual malware, but if you’re doing attachment-type filtering, let’s say a software vendor sends you an executable file to install to fix a bug in some software, then you can review that quarantined attachment and release it if you need to. Most of the control you have over malware filtering, other than which attachment types to block, is in the notifications. You can send no notifications at all, or you can notify the recipient that the email was quarantined. You can choose whether to notify internal or external senders that their email was quarantined, and you can also customize those notifications with friendly messages to help your users understand what’s going on. So let’s take a look at the malware filter configuration.

We can configure malware filtering in either portal: it’s available in the Exchange admin center, and there’s also an anti-malware configuration interface in the Security and Compliance Center.

Exchange Admin Center

Security and Compliance

Let’s stay in the Security and Compliance Center this time. When we click on Anti-malware, we can see that there’s one default malware filtering policy already in place, and we can create additional anti-malware policies as well. To create a new policy, click on +.

Let’s go through the settings. First we need to give it a name.

Malware Detection Response:

  • NO –> Do you want to notify recipients if their messages are quarantined? Note that this is quarantining of malware, not spam, which is different. Should we notify users that an email sent to them was quarantined by the malware filter? If you choose this option, no notification is sent.
  • YES AND USE THE DEFAULT NOTIFICATION TEXT –> A notification using the default notification text is sent to the recipient (we all know that system-generated notifications are usually not very good).
  • YES AND USE THE CUSTOM NOTIFICATION TEXT –> By enabling the custom response text we can specify more details about what the end user needs to do when they receive this alert message. Wherever possible, it’s helpful to use custom notifications that do a better job of explaining to your users what just happened.

Scroll down to move to the next set of settings

Common Attachment Types Filter:

When you turn this on for the first time, there is a preconfigured set of file types in the list. You can see that this list makes pretty good sense: it includes executable files, registry files, macro-enabled documents, the stuff that’s pretty high-risk these days. You can add to the list if you need to, and you can remove an extension from the list if you need to, although I don’t recommend it. If you find that your list is completely blank, it could be that someone has been in here before and removed everything from the list; EOP won’t automatically add anything back for you in that case, so you’ll need to rebuild the list from scratch by adding the file types you want to block. When we turn this on, it triggers the Malware Detection Response, which is the first setting we looked at, so it is a good idea to notify your users so they know what to do.
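For reference, the same setting can also be flipped from Exchange Online PowerShell. This is just a sketch: the policy name and the extra extension are examples, and parameter names can vary slightly between Exchange Online versions.

# add one extension to the existing list rather than overwriting it
$policy = Get-MalwareFilterPolicy -Identity "Default"
Set-MalwareFilterPolicy -Identity "Default" -EnableFileFilter $true -FileTypes ($policy.FileTypes + "iso")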

Notifications:

Do we want to notify the sender of the undelivered message? So if a message is blocked or quarantined by the malware filter, do we want to let the sender know that that has happened? I’d say yes for internal senders. That’s a good idea, letting them know that their outbound email was quarantined by the malware filter. For external senders, well, there’s the risk that the sender address has been spoofed, so we’d be sending a notification back to the address that may not be the real sender of the email. So let’s leave external senders off and just notify internal senders.

Administrator Notifications

We can also notify administrators when the malware filter blocks or quarantines an email. Let’s say that yes, we do want to notify our IT folks that that has happened, or at least a specific group of them, because that could be a sign of a compromised user or computer in the organization. But let’s not notify admins every time an external sender is blocked. If you really want to turn this on, by all means do, but in my experience it just generates way too many notifications.

Customize Notifications

One more notification to configure. If we’re notifying internal senders that their email was blocked by the malware filter, we want that to be a nice, friendly email that they understand.

Applied To:

You can scope policies so that they apply to specific recipients, to recipients in a particular domain of your organization, or to members of a group, and the recipients the policy is scoped to can then have different malware settings applied. An example would be if you have some power users who should be allowed to receive executable files via email, but only those users; the rest of your organization should not be allowed to receive executable files. That would be a case for configuring two separate malware filter policies and scoping them to those two different groups of users.

Once done, click on Save
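The same split can be done from Exchange Online PowerShell by creating a second policy and a rule that scopes it to a group. The policy, rule and group names below are just examples, not something from my tenant:

# policy for power users who are allowed to receive executables
New-MalwareFilterPolicy -Name "PowerUsers Malware Policy" -EnableFileFilter $false
# rule that applies the policy only to members of a specific group
New-MalwareFilterRule -Name "PowerUsers Malware Rule" -MalwareFilterPolicy "PowerUsers Malware Policy" -SentToMemberOf "PowerUsers@mehic.se"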

That’s it. In the next post we will take a look at Mail Flow Rules, which are our third EOP feature.

Cheers,

Nedim


Exchange Online Protection is included with every Exchange Online plan and every Office 365 plan that includes Exchange Online licenses. EOP gives us connection filtering, which is filtering based on the IP address of the server out on the internet that is making an SMTP connection to EOP to try to send email to you. Anti-malware is the familiar signature- and heuristics-based antivirus protection that has been around and evolving for a long time through a range of third-party products, as well as Microsoft’s own EOP and Forefront Online Protection services in the past. With mail flow rules you can create your own filtering rules based on characteristics of messages, such as the domain of the sender or text patterns in the body of the message. And content filtering is spam filtering based on analysis of the content of the email message itself. That’s just at a high level; there’s a lot of detail to those features, and we’ll get into that detail as we go through this and future posts. When mail comes into your organization, Exchange Online Protection processes it in order: connection filtering first, followed by malware scanning, mail flow rules, content filtering (or spam filtering as it’s often referred to), and then finally the Advanced Threat Protection features of EOP. We’re going to look at those ATP features in a future post.

Now there are two places where we can manage Exchange Online Protection settings: the Exchange Admin Center (EAC for short) and the Security and Compliance Center (SCC for short). If you browse to admin.microsoft.com and expand Admin centers, you will find both the EAC and the SCC.

Just to point out that Microsoft is investing heavily in adding security and compliance-related features to the SCC, even going so far as to deprecate the management interfaces for some of those features in other admin portals. Everything in EOP can be configured in the SCC.

In the EAC, the EOP configuration is found in the protection area. You can access it directly from the dashboard or you can click on the Protection Link in the left pane.

The Security and Compliance Center has the EOP settings under Threat management in the Policy section. (We will talk about Safe Links and Safe Attachments in a future post, and you will find them here as well. The reason they are missing here is that I don’t have an E5 license in my test tenant. I will take care of that later.)

CONNECTION FILTERING IN EOP

Connection filtering comes first in Exchange Online Protection’s processing of inbound mail flow. Why is it first? Because it’s one of the most effective, efficient, and lowest-cost methods of preventing spam. If you can block spam by blocking that initial SMTP connection, before the mail message itself is even allowed to be transmitted to your server, you save a lot of time, you save a lot of network bandwidth, and you also don’t need to waste server resources analyzing attached malware files or spam content. Connection filtering protects you at the very edge of Exchange Online Protection. It looks at the source IP address of the server that is sending the mail. Does that IP address come from a bad neighborhood, the kind of network ranges where known spammers live, or that are used for ISP customers where an infected PC might be trying to send out a spam or malware campaign? IP-based filtering like this accounts for a very high proportion of blocked spam; it’s very effective. You can also configure custom allow and block lists for specific IP addresses, such as the IP address of an application server you host in the cloud that needs to send mail to your staff. If you want to be sure those messages aren’t blocked by the spam filter, you just add that IP to your IP Allow list.

If you put an IP in the Allow list or the Block list, there are two different outcomes.

ALLOW LIST

  • The message is scored with a Spam Confidence Level, or SCL, of -1. This means that the message is treated as non-spam from a trusted source and bypasses further spam content filtering.
  • The message is still checked for malware, and if you have Advanced Threat Protection features enabled, those scans are also still performed.

BLOCK LIST

  • The message is scored with an SCL of 9. This means the message is treated as high confidence spam. This is basically the spammiest rating for a message.
  • Even though it’s already found to be spam, the malware and ATP scans are still performed because your high confidence spam configuration might be to allow the mail through to the quarantine or to the Junk Mail folder where the user can still access it.

SPAM SCORING TABLE

-1 –> Non-spam from a trusted source

0, 1 –> Non-spam

5, 6 –> Likely spam

7, 8, 9 –> High confidence spam

Let’s see how we can configure this in Office 365. Log in to the Office 365 portal and click on Exchange.

In Exchange Admin Center –> Dashboard –> Connection Filter (Another option is to click on Protection and Connection Filter)

There’s only one connection filter policy in your tenant, and we can’t create additional policies like we can with the malware and spam filters. The nice thing about connection filtering is that Microsoft is already doing it for you automatically. You don’t need to turn it on or configure specific IP reputation lists to use, but you can still do some customization of the connection filtering.

Click on the pen to edit the policy.

Once done, click on Connection Filtering. Here we have the possibility to add IP addresses to the Allow or Block list.

Enable Safe List –> this is a list of senders sourced from various third parties, as well as Microsoft’s own data, and those senders are considered trusted. If you enable this option, these senders are never marked as spam. You can’t see the list; it’s a matter of trusting Microsoft to use all the data at their disposal to make a judgement call for you. Microsoft recommends selecting this option because it should reduce the number of false positives (good mail that’s classified as spam) that you receive.
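If you prefer PowerShell over the portal, the same policy can be edited with the hosted connection filter cmdlets in Exchange Online PowerShell. The IP address below is just an example:

# view the default (and only) connection filter policy
Get-HostedConnectionFilterPolicy -Identity "Default"
# add an IP to the allow list and turn on the safe list
Set-HostedConnectionFilterPolicy -Identity "Default" -IPAllowList @{Add="203.0.113.25"} -EnableSafeList $true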

That’s it. This is the first feature and the first protection layer we have in EOP. The next post will focus on malware filtering, which comes after connection filtering.

Cheers,

Nedim


We continue our journey, and in this part we will talk about the Local Configuration Manager. The Local Configuration Manager, often referred to as the LCM, is really the core of PowerShell DSC; it runs on every server/computer that has PowerShell 4 or above installed on it.
It’s responsible for basically doing everything. This is the component that takes care of bringing your machine into its desired state and monitoring it for configuration drift.
The LCM stores information locally on disk about what the current configuration is and what the previous configuration was, so you can actually do a rollback. If you start a DSC configuration on a server, it will apply the configuration, and once that configuration has been applied, it stores it as the current MOF file on disk. Then if you apply another configuration on top of it, that current MOF file becomes your previous state, and you’re able to roll back to whatever your previous state was.
There is also a third one, pending: while a configuration is being applied, you will have a pending MOF file stored as well.

You may wonder where those files are stored on disk. They are stored under C:\Windows\System32\Configuration

We can see Current.mof and Previous.mof. I don’t have any pending.mof so that one is not displayed.

If you configure your server and later want to remove that configuration, you can either go and delete the file or use the Remove-DscConfigurationDocument cmdlet. When you call Remove-DscConfigurationDocument, you need to specify the Stage parameter. As we just discussed, the LCM keeps track of the current, pending, and previous MOFs, so when you call Remove-DscConfigurationDocument you need to specify which one you want to remove.
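A quick sketch of how that looks in practice (WMF 5.0 or later):

# remove a pending configuration document, if one exists
Remove-DscConfigurationDocument -Stage Pending -Verbose
# or remove the current configuration document from the node
Remove-DscConfigurationDocument -Stage Current -Force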

Before we proceed let’s talk about the commands that we can use for LCM. We have 2 commands.

Get-DscLocalConfigurationManager -> This command gets LCM settings, or meta-configuration, and the state of LCM for the node.

Set-DscLocalConfigurationManager –> This command applies LCM settings to nodes.

If we run Get-DscLocalConfigurationManager, we will notice that the LCM has a handful of properties you can set on it.

ActionAfterReboot –> Specifies what happens after a reboot during the application of a configuration. The possible values are “ContinueConfiguration” and “StopConfiguration”.

  • ContinueConfiguration: Continue applying the current configuration after machine reboot. This is the default value
  • StopConfiguration: Stop the current configuration after machine reboot.

AllowModuleOverwrite : TRUE if new configurations downloaded from the pull service are allowed to overwrite the old ones on the target node; otherwise, FALSE.

CertificateID : The thumbprint of a certificate used to secure credentials passed in a configuration. We will see this in action later.

ConfigurationID : This is a GUID that identifies the configuration to get from a pull service. The node will pull its configuration from the pull service if the configuration MOF is named <ConfigurationID>.mof. So you generate a random GUID, rename your MOF file to that GUID, and then set this property to the same GUID.

ConfigurationMode : Specifies how the LCM monitors the changes it makes to the machine. We have 3 modes:

  • ApplyOnly –> This basically tells the LCM: here’s what the desired state configuration should be, go ahead and configure that server to match it, and when you’re done, that’s it, I’ll see you next time I need you. So this is one shot.
  • ApplyAndMonitor –> This is the default value. It tells the LCM: here’s how I want you to configure my server; bring it to the desired state, and once you’re done, check on a regular basis to make sure the server is still in its desired state, and if it’s no longer in the desired state, just report it in the event log.
  • ApplyAndAutoCorrect –> This is extremely powerful. It configures the machine to be in the desired state, and if it detects (whenever it checks whether the machine is still in the desired state) that the machine is no longer in the desired state, it will do everything in its power to bring it back to that desired state. So it will revert any changes made.

ConfigurationModeFrequencyMins –> How often, in minutes, the current configuration is checked and applied. This property is ignored if the ConfigurationMode property is set to ApplyOnly. The default value is 15.

StatusRetentionTimeInDays –> The number of days the LCM keeps the status of the current configuration.

RebootNodeIfNeeded –> Set this to TRUE to automatically reboot the node after a configuration that requires reboot is applied. Otherwise, you will have to manually reboot the node for any configuration that requires it.

RefreshMode –> This allows you to define how the configuration is going to be applied to a server, in other words how the MOF file is going to get to your server. We have three refresh modes: Disabled, Push and Pull.

  • Disabled –> The LCM is not doing any work; basically you have a new server and there’s nothing going on with DSC on that server.
  • Push –> We already used this method in our first post. This means that you create a MOF file and then push that configuration to the servers.
  • Pull –> This allows you to register your server against a centralized repository where all your configurations are stored and all your modules are also available for those registered servers.

RefreshFrequencyMins: The time interval, in minutes, at which the LCM checks a pull service to get updated configurations. This value is ignored if the LCM is not configured in pull mode. The default value is 30.

[DSCLocalConfigurationManager()] Explanation

Once we know that, let’s configure our LCM. We start with the [DSCLocalConfigurationManager()] attribute right at the very top. Through the configuration keyword you define the name of something that gets generated. If you remember from the previous post, when we wrote a configuration that installs IIS, the file that got generated was <node name>.mof, in that case DC01.mof, and that configures the machine. What we want to do now is configure the LCM agent on that machine. One way to think about this is as meta configuration: you configure the configuration management agent. Because of that [DSCLocalConfigurationManager()] attribute at the top, when you run this, instead of generating a MOF it generates a meta.mof, and that gets pushed through a different mechanism: MOFs are generated and pushed with Start-DscConfiguration, while meta.mofs are pushed with Set-DscLocalConfigurationManager. We can use IntelliSense to help us build the configuration.

So I will change RebootNodeIfNeeded and the Configuration Mode
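To give you an idea of what that looks like, here is a minimal sketch of the meta-configuration (the node name and the two values are the ones used in this example):

[DSCLocalConfigurationManager()]
configuration LCMSettings
{
    Node 'DC01'
    {
        Settings
        {
            RebootNodeIfNeeded = $true
            ConfigurationMode  = 'ApplyAndAutoCorrect'
        }
    }
}

LCMSettings -OutputPath 'C:\'   # generates C:\DC01.meta.mof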

When we run this, it will generate a meta.mof file.

To apply this to our machine we need to use Set-DscLocalConfigurationManager command.

Set-DscLocalConfigurationManager -Path ‘C:\’ -ComputerName dc01 -Verbose

An LCM configuration can contain blocks only for a limited set of resources. In the previous example, the only resource called is Settings. The other available resources are:

  • ConfigurationRepositoryWeb: specifies an HTTP pull service for configurations.
  • ConfigurationRepositoryShare: specifies an SMB share for configurations.
  • ResourceRepositoryWeb: specifies an HTTP pull service for modules.
  • ResourceRepositoryShare: specifies an SMB share for modules.
  • ReportServerWeb: specifies an HTTP pull service to which reports are sent.
  • PartialConfiguration: provides data to enable partial configurations.
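Just to give you a feel for it, a pull-client meta-configuration that uses one of these blocks might look roughly like this. The server URL, registration key and configuration name are placeholders, not a working pull server:

[DSCLocalConfigurationManager()]
configuration PullClient
{
    Node 'DC01'
    {
        Settings
        {
            RefreshMode          = 'Pull'
            RefreshFrequencyMins = 30
            ConfigurationMode    = 'ApplyAndAutoCorrect'
        }
        ConfigurationRepositoryWeb PullServer
        {
            ServerURL          = 'https://pullserver:8080/PSDSCPullServer.svc'
            RegistrationKey    = '<registration key GUID>'
            ConfigurationNames = @('WebServerConfig')
        }
    }
}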

We will check some of these when we talk about the pull server. That’s it. In the next post we will start to configure our servers, and in a future post we will see how to configure a pull server.

Stay Tuned!

Cheers,

Nedim


PowerShell DSC is a management platform in Windows PowerShell that allows you to define the desired state of your machines, ensure that they are always in that state, and ideally prevent them from drifting from that ideal state.

PowerShell DSC provides a set of Windows PowerShell language extensions, cmdlets and resources that you can use to declaratively specify how you want your operating system and software environment to be configured. It supports both an interactive push model, where configurations are executed on target nodes on demand, and a pull model, where a central server manages and distributes configurations.

Something that is really cool with DSC is that domain membership is not required. You can configure both domain computers and those that are in a workgroup. We can push configs to workgroup computers, join them to a domain etc. We will cover all of this step-by-step.

DSC configuration script?

A PowerShell Desired State Configuration script is nothing more than a PS1 script like the ones we’re all used to having in our environments. However, it defines a special keyword called Configuration. That Configuration block is where you define what the desired state for your environment is going to be: in there, you define the different components and what their ideal state should be. When you call that PS1 file, it is compiled into what we refer to as a MOF file.

For now, it is sufficient to say that DSC takes a configuration file (and optionally a data file) and translates them into a text file following the Managed Object Format (MOF). This is a plain-text file that contains your configuration specifications. The file is then parsed and executed on the target server, using DSC features that know how to configure the system. We have two kinds of MOF files: one holds the configuration that we push/pull to the servers, and the second holds the Local Configuration Manager configuration.

MOF file contains the machine-readable version of a DSC configuration file.

META.MOF has the local configuration manager configuration

DSC REQUIREMENTS

To use DSC, both the computer you author the configuration files on and the target computers must have PowerShell 4 or higher installed. This means at least WMF 4 on all target hosts and on the computer where you are writing your configuration files. So if your environment has at least PowerShell 4, or PowerShell 5/5.1, you can use Desired State Configuration. Make sure PS Remoting is enabled if it has been disabled for some reason (it is enabled by default on Server 2012 and above), and check your execution policy.

LOCAL CONFIGURATION MANAGER

The Local Configuration Manager (LCM) is the PowerShell DSC engine. It is the heart and soul of DSC. It runs on all target nodes (It is built into the OS) and controls the execution of DSC configurations and DSC resources whether you are using a push or pull deployment model. It is a Windows service that is part of the WMI service host, so there is no direct service named LCM for you to look at. This is just a short info, we will explore LCM in the next post.

DSC RESOURCES

Desired State Configuration (DSC) Resources provide the building blocks for a DSC configuration. A resource exposes properties that can be configured (schema) and contains the PowerShell script functions that the Local Configuration Manager (LCM) calls to “make it so”.

A resource can model something as generic as a file or as specific as an IIS server setting. Groups of resources are combined into a DSC module, which organizes all the required files into a structure that is portable and includes metadata to identify how the resources are intended to be used. So we will use these resources in our config, and they will do the work for us.

Built-In DSC resources

So you’re going to need resources, and some resources are already available on your system. There is a cmdlet that will show you the DSC resources, called Get-DscResource, and these are the ones that ship in the box. Remember, these are the things you’re going to use in your configuration that will do the work for you. There are only 23 of them: things like File, Registry and Environment.

Now, to find all the DSC resources on the internet, we need to use the Find-DscResource command. Right now there are 1428 DSC resources out there that we can use.
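If you want to check the numbers on your own machine, something like this works (the counts will of course differ over time, and Find-DscResource needs the PowerShellGet module and internet access):

Get-DscResource  | Measure-Object    # resources installed locally, in the box
Find-DscResource | Measure-Object    # resources published in the PowerShell Gallery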

Now when you scroll down through the list you will notice that many of these resources begin with a c or an x.

x –> stands for experimental; these resources may not work 100% in all cases. They’re written by a wide range of people, typically Microsoft. This is Microsoft asking for feedback, and these resources may change.

c –> stands for community resources. These are resources where Microsoft might have an x version out there, and the community says, hey, we want to change that.

We will use many of these throughout the DSC posts, so be patient. Let’s start to write our configuration and see how we can use this tool to make our lives easier. First we will use the push method; in another post we will discuss the pull method.

DSC Structure

First we are going to put in a keyword called Configuration. Next we need to give it a name (basically you want to think of it exactly like a function: it defines a name that can then be invoked, and it runs code). The difference between a function and a configuration is that a configuration runs code whose job is to produce a configuration document. Then there are special keywords, the resources, that you declare. When you declare them, PowerShell, underneath the covers, collects them all and organizes them, and when you’re done it outputs them to a series of files. These configuration files are then pushed to the servers.

The next keyword is Node. Here we specify the machine that we would like to send the config to. (Later we will discuss how we can parameterize this.)

Third keyword is what Resource we want to use. In this example I will use WindowsFeature. So the basic structure looks like this

You may ask yourself what we should type in under WindowsFeature. Every time you need to know something about a resource (its structure) or how to use it, you can type Get-DscResource with the -Syntax parameter. This shows the things that you can set with the WindowsFeature resource, the properties that are available, and whether they’re mandatory. It also gives us a code sample: you can copy and paste it and fill it in, pipe it to clip, or use IntelliSense in the ISE.

So with this, our config would look like the example below. We write Configuration and give it a name. Next comes Node, with whatever nodes we want it to go to; I’m going to send it to DC01. Then comes the WindowsFeature resource (hit Ctrl + Space in the ISE and it will show you the syntax). Name is the name of the Windows feature that you want to install; you get the name by running Get-WindowsFeature. You will notice that I have Ensure = ‘Present’.
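Putting those three keywords together, a minimal version of the configuration described above could look like this (the configuration name is mine; the node and feature names are the ones used in this example):

configuration InstallWebServer
{
    Node 'DC01'
    {
        WindowsFeature IIS
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

InstallWebServer -OutputPath 'C:\'   # compiles the configuration into C:\DC01.mof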

ENSURE EXPLANATION

We have two options here (Present and Absent), and Present is the default. You may ask yourself what Ensure means. It is there because you’re not performing an operation; you’re not saying “add Windows feature”, because if you add it and then try to add it again, you get an error. My desire is to have that feature installed: if it’s not there, put it there; if it is there, don’t touch it.

When we run this, it will create a MOF file (which contains the machine-readable version of the DSC configuration). You will notice a warning message, but don’t worry about it right now; we will take care of that later.

Before we push this config to our server, let’s explore the MOF file. Just to point out that you do not need to know how to write or read a MOF; I’ll just bring up the MOF that we just generated so you can see what it looks like. Browse to the file and open it with Notepad.

So we can see here where it’s going (TargetNode), who it was generated by (user N), when and where it was generated, and some additional info. You do not need to explore this or know how to generate it by hand.

Close the file and let’s push it to our server. Before doing this, let’s see what commands we can use. In PowerShell, type Get-Command -Module PSDesiredStateConfiguration. This will give us the cmdlets that we’re going to be working with. We’re going to go through several of these in our DSC posts, but for now the command we are interested in is Start-DscConfiguration.

Start-DscConfiguration is the way to send (push) a config out to a box. We’re going to give it the path where the MOF is located, tell it to wait, and make it verbose.

-Wait –> If we run Start-DscConfiguration without -Wait, it creates a PowerShell background job. By adding -Wait, it runs right in front of us in real time. In other posts we will run it as a background job as well, so you can see how that looks and what you can do with it.

-Verbose –> gives us additional info as it runs

Start-DscConfiguration -Path ‘C:\’ -ComputerName DC01 -Wait -Verbose

VERBOSE OUTPUT EXPLANATION

When you first start using Desired State Configuration, what you’ll do is read the verbose output, and I encourage you to do that. It’ll give you a clue about what the engine is doing and why. You’ll see that it’s extremely regularly structured, lining up in columns. So let’s break this into small pieces so that you see what is happening. The key point is to understand how DSC works. First it’s testing if the..


When you deploy images with WDS, you need to go through Windows installation by choosing which image you are going to use, accepting the license, and so on, which is very time consuming if you need to deploy the same image to many machines. In this post we will go one step further and perform a clean installation without any user interaction. To make this work we need to create a special file with the answers to those questions (accepting the EULA, choosing partitions, region and language, etc.), which you can save on the bootable media so that setup can read it automatically and perform an unattended installation of the OS.

Before we continue, you will need to download the Windows Assessment and Deployment Kit (ADK), and you will need Windows 10 install media and a test machine. I will perform this on Windows 10 1809, so I downloaded the ADK for that version of Windows. In a production environment you would install the OS, update it, install apps, perform all customizations, and then sysprep it. Next you would create a capture image, which you would deploy to the clients.

Start the ADK installation and clear all pre-selected items. The only thing that you need to select is Deployment Tools. Click Install.

SIM INTRO

Once done, start the Windows System Image Manager

  • Distribution Share Pane –> All of your deployment files and folders will be stored here
  • Answer File Pane –> The answer file will appear here when you create a new one
  • Properties Pane –> Here are all the properties that we will configure
  • Windows Image Pane –> Here you begin and choose the install image
  • Message Pane –> Here we get information about validating our answer file, to make sure it is going to work before we go and try it out

Let’s start by right-clicking on Select a windows image or catalog file and choose Select Windows Image

Navigate to the folder where you exported the Windows 10 installation files, go into the sources folder, select the install.wim image file, and click the Open button.

When you click Open you will get a message asking to create a catalog file. This is a file that contains all the settings and properties that you can put into an answer file and configure. Click Yes to continue. It will take some time, so be patient. The .clg file will be saved in the same location where install.wim is stored.

Once done we will see 2 folders created.

Before expanding those let’s right click on the Create or open an answer file and click New Answer File

Once done, we will see the different stages that the Windows installation goes through. As you can see, all of these stages have a special name: they are called configuration passes. Even though you see 7, you don’t need to configure all of them.

  1. Windows PE –> Here we begin the installation; we configure the language, disk partitioning, etc.
  2. Offline Servicing –> Here we can patch our images (.wim files) offline
  3. Generalize –> This is the pass that stores info for our Sysprep generalize settings
  4. Specialize –> We always use this one. This configuration pass is used to create and configure information in the Windows image, and is specific to the hardware that the Windows image is being installed on.
  5./6. Audit System and Audit User –> These allow us to set up the machine so that it boots back in one time after installation has completed, with all drivers installed etc., so you can check that everything is fine and working properly. The next time someone starts that machine, they will get the Sysprep wizard.
  7. Oobe System –> Out-of-the-box experience

The passes that are always used are 1 (windowsPE), 4 (specialize) and 7 (oobeSystem).

Now, to get all of these populated, we need to expand the Components pane, right-click on a component and select which pass to add it to. Some components can be added to multiple passes and some can be added to only one.

Example: under Windows Image, expand Components and right-click on one.

Once the setting has been added to the answer file, we can highlight that component and then configure its properties in the Properties pane.

This is a very complex tool, so it will take time to explore everything, and you will need to do a lot of testing to see what it can do. After this short intro, let’s build our answer files. We will need two answer files to make this work.

CREATING ANSWER FILES

Under Windows Image Pane, expand Components and right-click on amd64_Microsoft-Windows-International-Core-WinPE_10.0.10586.0_neutral and select Add settings to Pass 1 windowsPE

Configure the language settings and expand the component.


Before we deploy images to our machines, let’s talk about the images and the differences between them. If you click on Boot Images and right-click on your boot image, you will see the Capture Image and Discover Image options.

We will start with Discover Image

DISCOVER IMAGE

In certain circumstances, I may have some clients that are unable to communicate over PXE: maybe the client does not support PXE, or you are sending those images over a very slow link, or you’ve got a high-security environment. It can also happen that your network team doesn’t like that traffic passing over the network. When that’s the case, you can burn a copy of this boot image to a DVD or a USB stick by creating what is called a Discover Image. The whole job of the Discover Image is simply to get that bootstrapping information, that WinPE content, onto the machine so that it can boot up and then use the usual Windows protocols to communicate back to our WDS server and download the install image to be deployed.

To create a Discover Image, click on Boot Images, right-click on your boot image and select Create Discover Image.

Give it a name and description, then specify the location where you want to save it to. If needed you can specify the WDS server. Click Next

The last step is to save this to, for example, a USB drive and boot the computer from that media.

CAPTURE IMAGE

In other circumstances, you may not necessarily be interested in the stock install image. You may want to build a custom image that includes applications and configurations specific to the needs of a user or group. In that case, you can set up that machine as a golden image, or reference image, and then capture its contents using what is called a Capture Image. Before creating a capture image, make sure that you have the OS deployed with all the apps and customizations you need. Then run Sysprep on the machine you would like to use as the golden image, with the OOBE, Generalize and Shutdown options. I imported a Windows 10 boot image this time. To create a Capture Image, click on Boot Images, right-click on your boot image and select Create Capture Image.
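The Sysprep run mentioned above is typically just this, from an elevated prompt on the reference machine:

C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown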

Give it a name, description and the location where you want to save it. Click Next

After a few minutes we will be able to see it under the Boot Images. Click the Add image to the WDS now

Follow the steps and here it is

Now change the boot order to boot from the network and start the sysprepped VM. Select the Capture Image we created.

On the WDS Image Capture Wizard, click Next. On the Directory to capture page, select the volume and give your image a name and a description.

New Image Location –> Enter the name and location, check Upload image to a WDS server, and connect to your WDS server. (Just to point out that I went to the WDS server and created a Windows 10 image group beforehand.)

After the install image capture completes, click Finish to close the wizard.

Once done, go back to your WDS and refresh the View.

Now if we boot the client again, we will have the option to use the new golden image. Select the first option.


We know the big selling points of Windows Deployment Services (WDS): number one, it doesn’t cost extra money, it’s included in Windows Server; and number two, it provides us with PXE boot and multicast technology. So WDS fits into any self-respecting operating system or server deployment infrastructure. In this and the next post we’ll talk about the initial configuration, as well as the PXE setup that connects WDS with your DHCP services. We’ll then talk about configuring and managing boot, install and discover images. These are the different kinds of images that WDS can use for booting a machine and installing an operating system.

WDS is kind of a combination of a couple of different solutions in one. We have to have WDS itself, because that’s the tool where the images are stored and delivered from, but it’s also a way to get machines connected to WDS and bootstrapped into that WinPE environment. So a large portion of this initial configuration is not only getting WDS up and running and getting the different kinds of images pulled into it, but also making the connection over to our DHCP services, so make sure that you have a DHCP server up and running as well.

INFRASTRUCTURE

DC01 –> Domain Controller and DHCP

WDS –> WDS Server (Domain joined)

Our first step in deploying WDS is to install the WDS role. You can use Server Manager to install it, or you can run PowerShell as admin and type in

Install-WindowsFeature WDS -IncludeAllSubFeature -IncludeManagementTools

Once done, type wdsmgmt.msc to open Windows Deployment Services. (You can access it through Server Manager as well: click on Tools –> Windows Deployment Services.)

Our next step is to click on Servers, and then, in the drop-down, right-click on your server and click Configure Server.

On Before you begin page click Next.

On the Install Options page we have to identify what kind of installation we want for this instance of WDS. In a lot of environments where the machines you’re deploying are going to be on the same Active Directory domain as the WDS server, you can use Integrated with Active Directory. But there is also the option to create a standalone server, which does not necessarily preclude you from automatically inserting the machines you’ll be provisioning into your Active Directory domain. More often than not, a lot of people like to create a separate server, perhaps even on a separate network; network teams often prefer to keep the multicast traffic that WDS can use away from the rest of the network, since it can oversaturate the network if it’s configured in the wrong way. For that reason, you’ll see a lot of situations where standalone servers are configured that are disconnected from Active Directory. Later on, as we go through the configuration of these machines, you can tell a machine to connect to your Active Directory once it’s provisioned, but for the sake of simplicity we’ll go with an Integrated with Active Directory configuration here; it makes a few items a little bit simpler. Click Next.

On Remote Installation Folder Location we configure the path to our remote installation folder. I’ll put this on the C drive, but it can hold a lot of content, because you’re going to be putting operating system images in it, so make sure you have plenty of space in whatever location you choose. You’ll get a little warning message if you choose the system volume rather than a different volume.

Next we have the option of determining how we want our PXE server to respond to incoming clients as they attempt to request an image from our WDS solution. By default, it won’t respond to any client computers, or we can choose to respond to known, or to known and unknown, client computers. Remember that any machine where you’ve configured network boot higher up in the BIOS boot order will attempt to connect to your PXE server before any of the other boot options, so you have to be careful with this configuration. I tend to set mine to known and unknown, but require administrator approval for the unknown computers. Once done, click Next.
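As a side note, the PXE answer policy can also be set from the command line with wdsutil. As far as I know the admin-approval option for unknown computers is a separate setting, so treat this as a rough equivalent of the GUI choice rather than an exact match:

wdsutil /Set-Server /AnswerClients:All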

When you click Next, it will go through the initial configuration of Windows Deployment Services. We still have quite a few other things to configure for WDS to even function. The next step is to identify what kind of images we want to import onto this server. For our purposes, there are two different kinds of images that we’re interested in. The first is a boot image: that little micro version of Windows, the Windows PE instance, that bootstraps the client to the point where it can request the install image, which is the second type of image we’re going to import. These boot and install images are available on the Windows ISO. What I did is unpack the ISO and copy it to C:\Windows Server 2016 Images\Unpacked; you will find boot.wim and install.wim in the sources folder. (When you install Windows Server, in most cases you will have 4 options: the full and Core variants of both the Standard and Datacenter editions. I removed all except Standard Core, so I will only have one option to install.)

Click Finish

Here we need to provide the path to the location where those two .wim files are.

On the Image Group page, I’m going to create an image group called Windows Server 2016 Standard Core and choose Next.
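For reference, the same imports can be scripted with the WDS PowerShell module that comes with the role. The paths and group name below match this example, and cmdlet parameters may differ slightly between Windows Server versions:

Import-WdsBootImage -Path "C:\Windows Server 2016 Images\Unpacked\sources\boot.wim"
New-WdsInstallImageGroup -Name "Windows Server 2016 Standard Core"
Import-WdsInstallImage -Path "C:\Windows Server 2016 Images\Unpacked\sources\install.wim" -ImageGroup "Windows Server 2016 Standard Core"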


In the previous post we talked about custom domains, CORS, soft delete, shared access signatures, file services, etc. We will start this post by talking about Azure CDN, or Azure Content Delivery Network.

So far we have been working with a storage account that is created and managed in the West Europe region in Azure. If we are serving images or streaming video content out of that location, that is going to be a great experience for people in that region, but what if we have users on the other side of the world that need to access content in our storage account? That is where Azure CDN comes into play.

Let’s say that we have a user up on the top left, he is in the US, and he wants to stream some video out of our storage account (nmazuretraining.blob.core.windows.net) that lives in a West Europe datacenter. Accessing that content from the other side of the world isn’t going to be the greatest experience, so what we can do is build an Azure CDN profile and integrate our storage account with the Azure CDN. The idea with a CDN is, if you look in the middle of the image below, we have edge servers: we basically take the content from our storage account and replicate it to edge servers in the content delivery network (CDN) all over the world, which gives people in other countries fast connectivity to a local server that has a copy of that data.

So to understand this we have a couple of steps:

Arrow 1: User1 wants to access some content in our storage account. When we integrate our storage account with the CDN, that user will actually go to a different endpoint (for example nmazuretraining.azureedge.net), something that we provision, and when User1 hits that infrastructure he gets routed to the closest edge server (probably a server in the US).

Arrow 2: If that edge server doesn’t have the content the user is looking for, the edge server will go over to our storage account. (The origin server can be an Azure Web App, Azure Cloud Service, Azure Storage account, or any publicly accessible web server.)

Arrow 3: Our origin (being our storage account) sends that content to the edge, and then we have a copy of that data there. That data has a time-to-live configuration, which controls how long it is cached on those edge servers (we will see how to configure that a little bit later). If we don’t specify a TTL, the default TTL is 7 days.

Arrow 4: In this step the edge server caches the file and returns it to the original requestor.

Regarding arrows 5 and 6: the cool thing about this is, if we have another user down at the bottom left who is also in the US, once he goes to access that same asset out of our storage account, he will be routed to a local data center where he will have a low-latency connection and a local copy of that data. So in addition to the first benefit of putting the data closer to the user, we’ve also got the ability to scale out requests across a bunch of different edge servers. This is especially useful when you have a global user base: we can take all of that load off the single origin and distribute it to multiple edge servers all around the world.

CDN Limitations:

Each Azure subscription has default limits for the following resources:

  • The number of CDN profiles that can be created.
  • The number of endpoints that can be created in a CDN profile.
  • The number of custom domains that can be mapped to an endpoint.

Let’s see how we can create CDN

Click on your storage account –> Blob Services –> Azure CDN

The first thing we have to do is create a new CDN profile. The name must be globally unique; I will use NMCDN.

Origin Hostname: This is the name of the server from which the CDN endpoint will pull the content. The default option is the name of your storage account. Once done, click Create.

Pricing Tier –> Here we have a couple of different tiers and different providers to work with. Microsoft partners with Verizon and Akamai to distribute the content around the world to all these different data centers, and as you can see, there’s Standard Microsoft, Standard Verizon, Standard Akamai, and then Premium Verizon. The plan you go with dictates how much you pay per gigabyte, up to 10 terabytes.

The big difference between Standard Verizon and Standard Akamai is that Standard Verizon can take up to 90 minutes before it starts caching your content, so you will be waiting around for a while when you first provision it, while Akamai takes minutes to replicate. For testing purposes I will select Akamai.
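If you prefer scripting it, roughly the same thing can be done with the Az.Cdn PowerShell module. The resource group name is a placeholder, and parameter names have changed between module versions, so check Get-Help on your version first:

New-AzCdnProfile -ProfileName "NMCDN" -ResourceGroupName "MyRG" -Location "WestEurope" -Sku Standard_Akamai
New-AzCdnEndpoint -EndpointName "nmazuretraining" -ProfileName "NMCDN" -ResourceGroupName "MyRG" -Location "WestEurope" -OriginName "storage" -OriginHostName "nmazuretraining.blob.core.windows.net"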

After the endpoint is created, it appears in the list of endpoints for the profile.

When we click on it we will be able to manage CDN and configure different features.

ORIGIN

Origin Type: If we need to change the origin type, we can do it here. We have 4 options: Storage for Azure Storage, Cloud service for Azure Cloud Services, Web App for Azure Web Apps, and Custom origin.

Origin Path: Here we can enter the path to the resources that we want to cache, for example /My Images. To allow caching of any resource at the domain, leave this setting blank.

Custom Domains –> Here we can configure our custom domains, so instead of using this endpoint, nmazuretraining.azureedge.net, we can set up something like cdn.company.com by adding a custom domain here.

Compression –> This allows us to compress files, which improves file transfer speed and increases page-load performance by reducing a file’s size before it is sent from the server. It is enabled by default.

Geo-Filtering –> When a user requests your content, by default, the content is served regardless of the location of the user making the request. However, in some cases, you may want to restrict access to your content by country. With the geo-filtering feature, you can create rules on specific paths on your CDN endpoint to allow or block content in selected countries.

Optimization –> Here we have a few options for different delivery scenarios.

  • General web delivery is the most common optimization option. It’s designed for general web content optimization, such as webpages and web applications. This optimization also can be used for file and video downloads.
  • General media streaming is used for live streaming
  • Video-on-demand media streaming optimization improves video-on-demand streaming content. The major difference between this optimization type and the general media streaming optimization type is the connection retry time-out. The time-out is much shorter to work with live streaming scenarios.
  • Large file download is optimized for content larger than 10 MB. If your average file size is smaller than 10 MB, use general web delivery
  • Dynamic site acceleration – this optimization involves an additional fee to use. You can use this optimization to accelerate a web app that includes numerous responses that aren’t cacheable. Examples are search results, checkout transactions, or real-time data.

MONITORING

Let’s start this section by talking about the Activity Log. The Azure Activity Log is considered a control plane, or management, log. Its purpose is to provide insight into the operations performed on the resource itself, in this case the storage account. This includes things like modifying user role assignments on the storage account, regenerating storage account keys, changing settings like requiring HTTPS to access the data plane, or modifying tags attached to the storage account. It doesn’t include logging of activities that happen against the data plane, such as uploading or deleting blobs, tables, or queues. Those are considered diagnostic..


DKIM (DomainKeys Identified Mail) is another scheme, like SPF, that aims to prevent spoofing of your domain name by spammers. Like SPF, this helps not only with emails from your domain to other organizations, but also with inbound emails to your users when attackers are trying to impersonate your domain. DKIM is implemented as a digital signature in the header of email messages. There’s some DNS involved in this, as well as asymmetric cryptography. DKIM is not mandatory, but it is certainly recommended.

Office 365 uses the tenant domain to manage DKIM signing; that is the unique onmicrosoft.com domain that each tenant chooses at signup. Microsoft uses the tenant domain because they control the DNS for that domain, which is crucial to the DKIM signing process. Microsoft recommends that we enable this for custom domains; DKIM is enabled by default on the .onmicrosoft.com domain.

Requirements for DKIM

We need to create 2 CNAME records in our public DNS zone and once we have them in place we can enable DKIM.

Let’s go through the steps for configuring DKIM. Before you log in to your external DNS and create those two CNAME records, you will need to know what values to enter. To discover that, log in to your Office 365 tenant and click on Admin centers –> Security & Compliance.

Click on Threat Management –> Policy –> DKIM

Click on the custom domain where you want to enable DKIM and click Enable. You will receive a warning message telling you that those two CNAME records have not been created, and there you will see the info about what to type in when creating them.

Selector: The selector is the DNS record that will be queried for the public key to verify the digital signature (selector1-domain-com)

_domainkey: protocol

Domain: domain.onmicrosoft.com

The selector and domain were provided in the DKIM signature. The _domainkey is a standard part of the DKIM protocol. The three pieces of information are joined together to represent the DNS name that will be queried

selector1-domain-com._domainkey.tenantdomain.onmicrosoft.com

Once you know that, you log in to your external DNS and create those two CNAME records. The reason we configure two selectors for DKIM signing in DNS is that Exchange Online rotates between the two, which allows it to expire old keys and enable the use of new keys on a regular basis; it rotates between them approximately every week. How would you configure those CNAME records?

Name: selector1._domainkey

Type: CNAME

Value: Here you specify the whole thing you see in the warning message, so for the first record you will type in selector1-domain-com._domainkey.tenantdomain.onmicrosoft.com

Do the same for the second record (selector2). Once done, go back to your Office 365 portal and click Enable. If you don’t see DKIM enabled right away, keep in mind that it may take a few minutes.
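A quick way to double-check everything from PowerShell (substitute your own custom domain; mehic.se is just the example domain used later in this post):

Resolve-DnsName selector1._domainkey.mehic.se -Type CNAME
Resolve-DnsName selector2._domainkey.mehic.se -Type CNAME
Get-DkimSigningConfig -Identity mehic.se | Format-List Domain, Enabled, Selector1CNAME, Selector2CNAME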

Now that DKIM signing is in place with a custom domain, we can implement DMARC as well.

NO DKIM KEYS SAVED FOR THIS DOMAIN

It may happen that you click on your custom domain and see Status: No DKIM keys saved for this domain, with no Enable button. Even if you connect to Exchange Online with PowerShell and run Get-DkimSigningConfig, you will not see that domain in the list.

To resolve this you will need to connect to Exchange Online with PowerShell and run New-DkimSigningConfig -DomainName <your custom domain> -Enabled $true

Once done, you will be able to enable DKIM on your custom domain.

DMARC (Domain-based Message Authentication, Reporting, and Conformance)

DMARC allows domain owners to publish a policy that advises receiving servers on what they should do with emails that fail SPF and DKIM checks. If those two mechanisms fail, you specify what happens next by publishing a DMARC policy. DMARC is implemented as a DNS TXT record, so like SPF and DKIM it is fairly simple to implement through DNS. A nice feature of DMARC is the reporting aspect, which allows you to receive reports from receiving servers. I should also point out that you can’t implement DMARC until you’ve implemented SPF, as well as DKIM, for your custom domain in Office 365.

So why is this DMARC stuff important and effective at preventing spoofing?

The secret sauce, so to speak, is in the way two different addresses work for email messages.

MAIL FROM

The first address is the Mail From address, also known as the 5321.MailFrom address. This address identifies the sender of the email. If it’s sent by a person using their email client to send from their mailbox, it will just match their email address, but if it’s sent from an application or a bulk mailing system, you’ll see a strange-looking email address there instead. The MailFrom address is where any return notices are sent; a good example is non-delivery reports. MailFrom is present in the message headers, but it’s usually not displayed to the recipient of the email.

FROM

The other address is the From address, known as the RFC 5322.From address. This address specifies the author of the email: who actually wrote and sent it. Even if it's a marketing email sent from a bulk email system, companies will usually use a friendly, real person's name as the From address, or they'll use a do-not-reply address. Either way, this is the address that most email clients display as the sender of the email.

So let’s see through an example why DMARC is important and effective.

Spoofed email example

MAIL FROM: accounts@nastyhackers.com

FROM: accounts@swedbank.se

TO: finance@mehic.se

Subject: Urgent action required to verify your account.

It uses a Mail From address of accounts@nastyhackers.com but a From address of accounts@swedbank.se. That by itself is not an immediate red flag; remember, the Mail From address can be, and usually is, different from the From address. So we need to leverage the anti-spoofing mechanisms (SPF, DKIM, DMARC).

SPF: The SPF result for this email could very well be a pass, because SPF checks are performed against the Mail From address, and there's nothing stopping these nastyhackers from adding a valid SPF record to their own domain, so the email passes the SPF check.

DKIM: The email can pass the DKIM check as well, because nastyhackers can sign it using their own domain. It's the same type of DKIM scenario as when Microsoft DKIM-signs your outbound email from Office 365 before you enable DKIM for your custom domains.

DMARC: What the attacker can't do is pass a DMARC check, because DMARC requires the domain in the From address to align with the domain that passed SPF or DKIM. They are spoofing someone else's domain in the From field, and they can't satisfy that alignment because you control the DNS zone for your domain, and therefore the SPF record, the DKIM keys and the DMARC policy. The phisher can only beat DMARC if they already control your domain's DNS.

So how can we enable this? Once SPF and DKIM are in place, we publish a DMARC policy in DNS as a TXT record.

DMARC TAG OPTIONS

DMARC tags are the language of the DMARC standard. They tell the email receiver (1) to check for DMARC and (2) what to do with messages that fail DMARC authentication. I will focus only on these tags.

v = This is the version tag that identifies the record as a DMARC record. Its value must be DMARC1 and it must be listed first in the DMARC record

p = This indicates the requested policy you wish mailbox providers to apply when your email fails DMARC. Options are none, quarantine and reject

  • none – means “take no action, just collect data and send me reports”
  • quarantine – means “treat with suspicion”
  • reject – means “block outright”

pct = The percentage of messages to which the DMARC policy is to be applied

rua = This is a tag that lets mailbox providers know where you want aggregate reports to be sent. Aggregate reports provide visibility into the health of your email program by helping to identify potential authentication issues or malicious activity.

ruf = This tag lets mailbox providers know where you want your forensic (message-level) reports to be sent.

On your custom domain's external DNS, create a new TXT record with these values:

Name: _dmarc

Type: TXT

Value: v=DMARC1; p=none; pct=100; rua=mailto:support@domain.com; ruf=mailto:support@domain.com

You can verify the record with a tool such as https://dmarcian.com/dmarc-inspector/
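You can also query the published policy directly from PowerShell; a small sketch with the placeholder domain.com:

# Look up the DMARC policy that receiving servers will see.
Resolve-DnsName -Name "_dmarc.domain.com" -Type TXT |
    Select-Object -ExpandProperty Strings

# Expected output for the record above:
# v=DMARC1; p=none; pct=100; rua=mailto:support@domain.com; ruf=mailto:support@domain.com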

Warnings / Errors

You may receive this warning message when you run a DMARC lookup:

Missing authorization for External Destination.

What this means is that your domain (the one whose DMARC policy points its reports at another domain) is not authorized to send those reports to the destination domain; the destination domain has to publish a record granting that authorization.

Let's say we have a domain named nm.com whose DMARC policy sends its reports to addresses at mehic.com. Mehic.com must have a TXT record that allows it to receive reports for nm.com.

So we need to create a TXT record in the destination DNS zone (mehic.com) with these values:

Name: nm.com._report._dmarc

Type: TXT

Value: v=DMARC1

We can also use a wildcard to accept DMARC reports for any domain:

*._report._dmarc
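You can check that the authorization record is visible the same way; a sketch using the example domains above:

# The destination domain (mehic.com) publishes this record so that
# receivers will deliver reports about nm.com to it.
Resolve-DnsName -Name "nm.com._report._dmarc.mehic.com" -Type TXT |
    Select-Object -ExpandProperty Strings
# Expected: v=DMARC1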

That’s it.

Cheers,

Nedim


Azure storage has evolved from the very first iterations of Azure, when the platform was purely platform-as-a-service and was used mainly for workloads such as websites and databases; today it supports much more, including infrastructure virtual machines. Azure storage grows as you need it, and it grows instantly. You can use it for a tiny amount of data that you access infrequently and slowly, or for extremely large, demanding, high-performance workloads that run on SSD storage and need hundreds of thousands of IOPS. You don't have to think about the underlying implementation: you define the storage and Azure takes care of the rest.
You don't have to worry about load balancing or redundancy, or about making sure data is written to multiple locations. Within Azure, before a piece of data is considered written, it has to be stored in at least three individual locations; only then does Azure return success for the write. Azure storage is designed to support pretty much any workload or application that you have.

There are two tiers of storage: Standard and Premium.

Standard

Standard storage is what most applications will use; it's cheaper and a bit slower. You pay for what you consume: if you have a 1 TB VHD with only 100 MB of data on it, you pay only for those 100 MB, not for the whole provisioned size. It is not the disk size that counts, it is the data you have on it (standard disks are billed per transaction and per GB). The Standard performance tier allows you to store Tables, Queues, Files, Blobs and Azure virtual machine disks.

Premium

Premium disks aren't charged by transaction; it's more of a flat-fee model. Premium storage is based on SSD storage, as opposed to the hard-drive storage we have in the standard tier. You have to do some math, keeping costs in mind on one hand and your need for IOPS (input/output operations per second) on the other. How fast do you need the storage subsystem to be for your IaaS VM workloads? If they're doing relatively low-horsepower tasks, maybe serving DNS or some light IIS web, you may be fine with standard storage. By contrast, if you're doing a lot of random IO and you're hosting, say, IaaS-based SQL Server or MySQL database servers, then you may want to look at premium for them. Here we are charged for the provisioned disk size, not the data written. The Premium performance tier only supports Azure virtual machine disks.

WHEN TO USE STANDARD STORAGE AND WHEN TO USE PREMIUM?

Standard storage is fine when you've got more than one VM doing the same job. For example, you could have two domain controllers on standard storage in an availability set; you're getting a great SLA and they're going to work quite well, and the same goes for multiple web servers. So domain controllers are good candidates, maybe remote desktop brokers that aren't very busy, and some light web servers. But standard storage just does not cut it when you've got disk-intensive applications. Busy file servers with many users are not going to perform well. SQL databases outside of dev and test really won't work well without premium disks. SharePoint servers definitely need premium disks. And of course, remote desktop session hosts, as discussed earlier, with many users accessing a single operating system and generating lots of reads and writes to disk, are definitely going to need premium storage. Application and database workloads really do need that premium storage.

Storage Account SLA

Another important piece when configuring storage accounts is the SLA. I included a link below where you can find all the SLA details, so please read it.

STORAGE ACCOUNT SLA

Let’s see how we can create storage accounts and let’s explore storage account properties.

Login to Azure Portal and click on the Storage Account Blade in the left pane

(If you don’t see storage account blade in the left pane, click on All services and type in storage account. Click on the star to add it in the favorites.)

Click on the + Add

Let’s first focus on the Basics Tab

Select your subscription and the resource group. If you don't have one, you can click Create new. Next, give your storage account a name and select a location. We already discussed Standard vs Premium storage; I will choose Standard for this example.

Now, under Account kind we have three options to choose from.

Storage V2 (General Purpose V2) –> these storage accounts support all of the latest features for blobs, files, queues, and tables. GPv2 accounts support all APIs and features supported in GPv1 and Blob storage accounts. They also support the same durability, availability, scalability, and performance features in those account types. Pricing for GPv2 accounts has been designed to deliver the lowest per gigabyte prices, and industry competitive transaction prices. General-purpose v2 accounts are recommended for most storage scenarios so I will not focus on V1.

BLOB STORAGE –> they support all the same block blob features as GPv2, but are limited to supporting only block blobs and append blobs, and not page blobs.

Our next step is to choose Replication. There we have four options to choose from.

Locally-redundant storage (LRS) –> data is stored in a single datacenter in the region in which you created your storage account. LRS is the lowest-cost option and offers the least durability compared to the other options. In the event of a datacenter-level disaster (fire, flooding, etc.) all replicas might be lost or unrecoverable. So with locally-redundant storage you get three copies of your data within one Azure region.

Zone-redundant storage (ZRS) –> ZRS replicates your data synchronously across multiple availability zones. Consider ZRS for scenarios like transactional applications where downtime is not acceptable. It lets you read and write data even if a single zone is unavailable or unrecoverable. With zone-redundant storage you get three copies spread across separate availability zones in the region.

Geo-redundant storage (GRS) –> it replicates our data to a secondary region that is hundreds of miles away from the primary region. If your storage account has GRS enabled, your data is durable even in the case of a complete regional outage or a disaster in which the primary region is not recoverable. With GRS enabled, an update is first committed to the primary region and then replicated asynchronously to the secondary region, where it is replicated again.
Geo-redundant storage means you've got three copies in the primary region and three copies in a secondary region. You cannot access the secondary region unless a failover occurs.

Read-access geo-redundant storage (RA-GRS) –> it maximizes availability for your storage account. RA-GRS provides read-only access to the data in the secondary location, in addition to geo-replication across two regions. So again you've got six copies in total, three in one region and three in another, and you get the ability to read the data in both regions.

INFO! A key point about performance (Standard vs Premium): while the performance is set to Standard you can click the Replication dropdown and you get more replication options than you do with Premium. If you set it to Premium, you'll notice that you only get locally-redundant storage, and that's one of the differences between the two.
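For reference, if you script this with the Az PowerShell module instead of the portal, the replication choices map to SKU names on the storage account; the resource group and account name below are just placeholders.

# Replication options expressed as Az PowerShell SKU names (performance + redundancy):
#   Standard_LRS    - locally-redundant storage
#   Standard_ZRS    - zone-redundant storage
#   Standard_GRS    - geo-redundant storage
#   Standard_RAGRS  - read-access geo-redundant storage
#   Premium_LRS     - premium (SSD) storage, LRS only
# Example: switch an existing standard account to geo-redundant storage.
Set-AzStorageAccount -ResourceGroupName "rg-demo" -Name "mystorageaccount" -SkuName Standard_GRS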

Access tiers: Cool and Hot can be chosen as the default access tier for an entire storage account, whereas the Archive tier has to be set per blob.

HOT –> This is the default option. Hot storage is for data that you know you’ll need to access very frequently. Hot data is always at the ready when you need it, and if you know you’re going to need to access your data at least once a month, you should keep it Hot. Accessing data in the Hot tier is most cost-effective, while storage costs are somewhat higher.

COOL –> optimized for storing large amounts of data that is infrequently accessed and stored for at least 30 days. Storing data in the Cool tier is more cost-effective, but accessing that data may be somewhat more expensive than accessing data in the Hot tier.

ARCHIVE (we will see later how to enable this) –> consider the Archive tier if you don't expect to access your data within about six months. When you set a blob to the Archive tier, expect that retrieving it will take some hours when you do access it. Archive data is effectively stored offline, which is why it takes a while to bring it back, and keep in mind that you pay more for that access.
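If you later want to change the account-level default access tier without recreating the account, a minimal sketch with the Az PowerShell module (placeholder names again):

# Change the account's default access tier from Hot to Cool.
# Blobs without an explicitly set tier inherit this default.
Set-AzStorageAccount -ResourceGroupName "rg-demo" -Name "mystorageaccount" -AccessTier Cool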

Once done, click on Next: Advanced

Here we need to decide whether to require secure transfer. The default is Enabled, but for this example I will select Disabled.

Secure transfer required –> this setting requires HTTPS-based connections when accessing your blobs. One thing to keep in mind: if you add a custom domain to your storage account (so you can access your files using your own domain name instead of the one provided by Azure), that access happens over HTTP, because Azure doesn't have a certificate with your domain name on it, so requiring secure transfer will block access over the custom domain.

I will keep the defaults for the rest of the settings and click on Review + create.

And after a few seconds our storage account will be created.
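If you'd rather script the same steps, a rough equivalent with the Az PowerShell module could look like this; the resource group, account name and location are placeholders, and secure transfer is disabled only to mirror the example above.

# Log in and select the subscription interactively.
Connect-AzAccount

# Create a general-purpose v2 account on the Standard tier with
# locally-redundant storage and the Hot access tier, matching the portal walkthrough.
New-AzStorageAccount `
    -ResourceGroupName "rg-demo" `
    -Name "mystorageaccount" `
    -Location "westeurope" `
    -SkuName Standard_LRS `
    -Kind StorageV2 `
    -AccessTier Hot `
    -EnableHttpsTrafficOnly $false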

We will be able to see it under the Storage accounts blade. When we click on our storage account, the first thing that comes up is the Overview. At the top you will see general info about the storage account: which resource group it belongs to, location, replication, etc.

Open in Explorer –> this allows you to open the storage account in Azure Storage Explorer. When you click on it, you will first have to download the tool. What is new in Azure is that Storage Explorer is available in preview directly in the portal.

Under the general info we have the services section: Blobs, Files, Tables and Queues. We will focus on Blobs and Files.

Let’s start with the Blobs.

Azure Blob storage is designed to store large amounts of unstructured text or binary data; in fact, blob stands for binary large object. We can use blob storage to store files like virtual hard disks, videos that you want to stream from an application, images for a web application, and even log files made up of plain text.

We have 3 different blob types:

  • PAGE BLOBS –> this is the blob type used to store virtual hard disks in Azure Storage. So anytime you build a..