Understanding the Azure.Core library

>>In this episode of the On.NET show, I'm going to have Jeff from the Azure SDK team join us to talk about some of the core features that are in these new SDKs. [MUSIC]
>>In this episode of the On.NET show, I'm joined by Jeff from the Azure SDK team. He's going to talk to us about the Azure Core SDK. So Jeff, why don't you first tell me a little bit about who you are and what exactly it is that you do.
>>Sure. I've been working in .NET for a long time; I've written a bunch of books and videos and things like that. Now I'm a software architect on the Azure SDK team, helping to architect these SDKs so that they have a high degree of consistency within each language
and even across languages.
>>In a previous episode, we had some other folks on, and they talked to us a little bit about why we have these new SDKs and what some of their core goals are. In this particular one, we're going to talk about the Core SDK. My understanding is that these are essentially cross-cutting features that span the libraries for all of these different services, giving us common patterns and techniques that we can reuse across the board.
>>Yes.
>>Yeah, okay.
>>Yes, so to begin. On a previous
team that I was on, we noticed that there are some fundamental cloud-native features that you really want every distributed application to have. When you're using one of our client SDKs, you are building a distributed app, because your client is talking to Azure services. So we created this thing called an HTTP pipeline, which is extensible. The way that it works, I'll
demonstrate with this animated slide. In most of our SDKs, you'll be creating some client, like a blob client, a container client, an Event Hubs client, and so on. On those clients, there are various methods that you call. Those methods ultimately end up creating an HTTP request object. That request goes to some transport; in .NET it's HttpClient, but this works in other languages too. Then the transport sends the request over to the particular Azure service that you're trying to communicate with, and that service sends back a response. Then we do some processing and return it as the result of the method. Well, we want certain behaviors that are method-agnostic. That is, these behaviors are the same regardless of which method you're calling. So we've created this thing called a pipeline, and the definition of a pipeline is that it's an ordered set of policies. We have a bunch of policies that are pluggable into the pipeline. For example, one of the
policies that we have is the client request ID policy. (By the way, the whole implementation of the pipeline lives in the Azure.Core library.) As the HTTP request object moves down through the pipeline, this policy adds a header to the HTTP request with a unique client request ID so you can track the request.
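As a sketch of what such a header-stamping policy can look like (this assumes the Azure.Core package; the class name is illustrative, not the SDK's internal implementation):

```csharp
using System;
using Azure.Core;
using Azure.Core.Pipeline;

// Hypothetical policy in the spirit of the built-in client request ID policy:
// it stamps every outgoing request with a unique ID header.
public class RequestIdStampingPolicy : HttpPipelineSynchronousPolicy
{
    public override void OnSendingRequest(HttpMessage message)
    {
        // A unique ID lets the request be correlated in logs on both ends.
        message.Request.Headers.SetValue(
            "x-ms-client-request-id", Guid.NewGuid().ToString());
    }
}
```

`HttpPipelineSynchronousPolicy` is a convenience base class in Azure.Core for policies that do quick, synchronous work on the way out or back.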
Then the request moves to the next policy in the pipeline, like the authentication one. Now, I know in the previous episode, Adrian and Alex talked about our credentials and our authentication in the Azure Identity library, and in the next episode Scott is going to present more about that, so he'll cover it in more detail. But we have lots of different ways of doing authentication, and how that affects the HTTP request as it flows through the pipeline is handled by this policy. Then we have a logging policy, where we can log that an outgoing request occurred and what time it occurred, and then we can see the response when it comes back in, so we can see how long the operation took. Then there's distributed
tracing, which I know was also discussed a lot in the previous episode, where you can wire in a mechanism to send telemetry to Azure Monitor or some other storage system. So the request travels down through all of these policies and then ultimately makes it across the wire. When the service replies, it gives back an HTTP response, and the HTTP response flows backward through the pipeline, going through these policies in reverse order. So it would hit distributed tracing; now distributed tracing knows that the operation has completed, and it can go report it. Then it hits logging, to log that the result has come in. Some of these policies don't do anything with the response, like authentication or client request ID, and the response eventually makes it up to your application code. Now, we have a few other policies
that we put into the pipeline. One is called the "buffer response" policy. This is used for HTTP responses that return a payload, usually JSON or XML (unlike blobs, which are binary data). The buffer response policy reads all of the data from the service and loads it into memory, so that when we get to the top of the pipeline we can deserialize it into model objects that we then return to the customer code. Another policy that we have, which is really the most complicated of all the policies, is the retry policy. If something causes a failure while making a request, the retry policy catches that exception, loops around, and issues the request again, sending the HTTP request message back through the policies that sit below retry. So another way of saying it is: everything between retry and the transport will execute once per try, and everything between the method call and retry will execute once per method call. So there are some optimizations happening there.
Also, regarding that buffer response policy: when we're reading the data from the service, it's possible that the connection might die. If it does, an error occurs in the buffer response policy, and that error is caught by the retry policy, which can then reissue the request to re-download the data. So what we're trying to do with this mechanism is make your applications very reliable and robust to failure without you, the customer, having to do anything; you get it for free by using our client classes. Now, I mentioned at the top left of the slide that this is an
extensible HTTP pipeline. By that, I mean that you as the customer can actually create your own policies and plug them into the pipeline. We have two places where you can plug them in. One is called "per-call policies": whatever policy you create, deriving from some base class and overriding a virtual method, you can insert it here, and then it will execute once per method call. The other place where you can plug them in is "per-retry policies": any policy you plug into this location will execute once per try. So it's a super powerful mechanism. It's actually quite simple in its implementation, and it makes your application very robust and reliable.
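In code, the two plug-in points map to an enum passed to `AddPolicy` on the client options. A sketch, assuming the Azure.Core and Azure.Storage.Blobs packages (`MyPolicy` is a placeholder name):

```csharp
using Azure.Core;
using Azure.Core.Pipeline;
using Azure.Storage.Blobs;

// Placeholder policy; a real one would override OnSendingRequest/OnReceivedResponse.
class MyPolicy : HttpPipelineSynchronousPolicy { }

class Example
{
    static void Configure()
    {
        var options = new BlobClientOptions();

        // Per-call: runs once per method call, above the retry policy.
        options.AddPolicy(new MyPolicy(), HttpPipelinePosition.PerCall);

        // Per-retry: runs once per attempt, so it re-executes on every retry.
        options.AddPolicy(new MyPolicy(), HttpPipelinePosition.PerRetry);
    }
}
```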
>>That's interesting. As I'm looking at this diagram, this animation you have here, it reminds me a lot of ASP.NET middleware, where I have the ability to plug into a pipeline, where I can expect messages coming in and messages going out, and where I can change requests, do logging, caching, tracing, all of these types of things.
>>Yes.
>>Is that a similar kind of middleware?
>>Yes, this is a very similar type of middleware. This is for outgoing messages, and that's for incoming.
>>Got it.
>>That's messages coming into a service; this is outgoing from a client.
>>Got you.
>>Yes, quite similar.
>>Okay. Also, you have a lot of built-in features here that I just don't
have to write myself.
>>Yes.
>>When we talk about working in the cloud, being able to handle things like transient errors, for instance, with built-in retries, or authentication and things of that nature, I really just want to focus on my business case and whatever code I'm trying to write, and not necessarily on these other types of concerns.
>>Yes.
>>So it's good for me; I'm totally fine with Microsoft doing that for me instead.
>>Yes. And over the years, of the various Azure SDKs, some have offered some of these features, like retries, and some have not. But now we provide them for all of these SDKs that we're working on, in a very consistent way, with a clean architecture that allows customers to plug into it as well.
>>Okay, awesome. So then
that means that I can create my own custom policy if I want to, and plug it in somewhere inside the pipeline.
>>Yes, and in fact, I have a demo where I can show doing exactly that.
>>Show us that.
>>Okay. So here in Visual Studio, I have some C# code where I can show you how to configure the pipeline. That's just taking the policies that we have and configuring them. But I will also demonstrate what you just asked about, which is inserting a custom policy into it. First, I want to start down here. Usually, where customers
begin working with our client libraries is by creating some client. In this example, I'm creating a BlobServiceClient, and I'm passing it a URI and some credentials. These credentials will end up being used by the pipeline in that authentication policy I showed on the slide. You can also pass in some pipeline options to configure the pipeline. Now, the options are optional, so you don't have to pass them, and a lot of times customers don't, in which case, when you new up this client, underneath the covers it creates a pipeline with all the default configuration, like default retries and so on. But I do want to show here
that you can customize that. Here in "Main", I'm going to new up a BlobClientOptions object. Then I'm going to initialize some of its fields. There's a retry section in there: I can set "MaxRetries" to some number; here I'm setting it to 10. The maximum delay for each retry I'm setting to three seconds, and there are various other options in here. I do encourage you to explore them and read the documentation. Then here, I'm turning logging on, because diagnostics default to off.
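Pieced together from the description, the setup might look like this. It's a sketch: the account URI and the console log listener are illustrative, and `DefaultAzureCredential` from the Azure.Identity package stands in for whatever credential the demo used.

```csharp
using System;
using Azure.Core.Diagnostics;
using Azure.Identity;
using Azure.Storage.Blobs;

// Surface the SDK's diagnostic events on the console (logging is off by default).
using var listener = AzureEventSourceListener.CreateConsoleLogger();

var options = new BlobClientOptions();
options.Retry.MaxRetries = 10;                    // as in the demo
options.Retry.MaxDelay = TimeSpan.FromSeconds(3); // cap the per-retry delay
options.Diagnostics.IsLoggingEnabled = true;      // turn the logging policy on

var client = new BlobServiceClient(
    new Uri("https://myaccount.blob.core.windows.net"), // illustrative URI
    new DefaultAzureCredential(),
    options);
```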
Once I've created this options object and initialized it, I can then pass it to the BlobServiceClient. But before I do that, to demonstrate what you asked me about, on the options I can also call this method, "AddPolicy". To this method, I pass some policy object, and I have defined one down here below. This class, called "SimpleTracingPolicy", derives from a base class that's in our Azure.Core library. Then I override this virtual method, "ProcessAsync". This method will be called as your HTTP request goes through the pipeline. You get to execute some code up front; here I'm just calling "Console.WriteLine" to prove that we made it here, so you'll see something in the console window. Then I call "ProcessNextAsync", a method I inherited from the base class, and I pass the message down the pipeline to the next policy. Then, when the HTTP response returns, the code after this call executes; again, I'm just doing a Console.WriteLine so we can see the response that has come back. So you can see how simple it really is to define your own policy: you just override the one method, the code can be whatever you want it to be, and then you just forward the message on to the next policy in the pipeline.
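A reconstruction of what the `SimpleTracingPolicy` shown on screen might look like, based on the description above (the exact WriteLine text is illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Core.Pipeline;

public class SimpleTracingPolicy : HttpPipelinePolicy
{
    public override async ValueTask ProcessAsync(
        HttpMessage message, ReadOnlyMemory<HttpPipelinePolicy> pipeline)
    {
        // Runs on the way down the pipeline, before the request is sent.
        Console.WriteLine($">> Request: {message.Request.Method} {message.Request.Uri}");

        // Forward the message to the next policy; resumes when the response returns.
        await ProcessNextAsync(message, pipeline);

        // Runs on the way back up, after the response has been received.
        Console.WriteLine($"<< Response: {message.Response.Status} for {message.Request.Uri}");
    }

    public override void Process(
        HttpMessage message, ReadOnlyMemory<HttpPipelinePolicy> pipeline)
    {
        // Synchronous code path; same idea as above.
        Console.WriteLine($">> Request: {message.Request.Method} {message.Request.Uri}");
        ProcessNext(message, pipeline);
        Console.WriteLine($"<< Response: {message.Response.Status} for {message.Request.Uri}");
    }
}
```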
>>Cool.
>>So now, when I call this AddPolicy method, I'm newing up an instance of that class, and then I have to tell the method whether I want this to go in the per-call section or the per-retry section of the pipeline. Here I'm saying per-call, so it will execute only once per operation, as opposed to per-retry, which would execute once per try.
>>So let's talk about that.
>>Sure.
>>Let's dig into that for a little bit.
>>Okay.
>>So I'm just re-picturing that diagram you showed us a little while ago. We know that retry happens at a certain point within that pipeline.
>>Yes.
>>So essentially what this call is doing is saying: I want this to happen at that point, right now, at what we're looking at here in the diagram.
>>Yes.
>>It's going to happen at that point.
>>So I added it here, in the per-call section, so it will happen here at the very beginning. Or I could have added it in the per-retry section, in which case it would happen at this location.
>>But those are the two very specific extensibility points that we have.
>>That's right.
>>It could either be one of the first ones that gets called, or it could be, again, post-authentication, within that retry section.
>>That is correct.
>>Okay.
>>Now, you can call the method multiple times, so you can add multiple policies in the per-retry section, or multiple policies in the per-call section.
>>Sure. So I have another question too.
>>Yeah.
>>Again, just thinking about how most middleware
pipelines and things work: do I have the ability to circumvent the flow of the messages? Can I short-circuit a message and say, hey, I don't want you to continue?
>>You can always insert a policy, just not forward the message on to the next policy in the pipeline, and simply return. That would be a useful thing to do if, for example, you wanted to create a policy for fault injection and pretend that the service you're trying to talk to is down: you could easily insert a policy that just returns a failure for, say, one out of every ten requests.
>>Sure.
>>Then you could build your application to be robust against those problems.
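One way to sketch such a fault-injection policy is to simulate the failure by throwing before forwarding, which exercises the caller's error handling. This is an illustration, not an SDK feature; throwing is the simple option here, since fabricating a whole HTTP response would require a response implementation such as the `MockResponse` type from the Azure.Core.TestFramework package.

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Core;
using Azure.Core.Pipeline;

// Illustrative policy: fails roughly one out of every ten requests
// without ever letting them reach the wire.
public class FaultInjectionPolicy : HttpPipelinePolicy
{
    private readonly Random _random = new Random();

    public override ValueTask ProcessAsync(
        HttpMessage message, ReadOnlyMemory<HttpPipelinePolicy> pipeline)
    {
        if (_random.Next(10) == 0)
        {
            // Simulate a service outage instead of forwarding the request.
            throw new RequestFailedException(503, "Injected fault for testing");
        }
        return ProcessNextAsync(message, pipeline);
    }

    public override void Process(
        HttpMessage message, ReadOnlyMemory<HttpPipelinePolicy> pipeline)
    {
        if (_random.Next(10) == 0)
        {
            throw new RequestFailedException(503, "Injected fault for testing");
        }
        ProcessNext(message, pipeline);
    }
}
```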
>>That might also be an interesting scenario when we're talking about testing.
>>Absolutely, yes.
>>A mock response, for instance, of some sort. So instead of going through the entire pipeline, and doing retries and all of these types of things, I could mock up responses, so that in my integration tests, or my unit tests, or whatever the case is, I could have some deterministic responses that I can react to.
>>Yes, definitely true. Other examples are things like a circuit breaker policy, for those people who know how that works, or a client-side caching mechanism, where you might look locally to see if you have the data and return it, rather than making a call to the service. So there are numerous reasons why this extensibility can be useful to customers.
>>Great, sounds good.
>>All right. So now that
I've created the options, set up some of the values, and added my own custom policy, I'll now new up the client, passing in those options. So now this client object has the URI and the pipeline associated with it, and now I'll go and make a call to do something with this client; every call through the client goes through the pipeline. So when I hit F10 to execute this, you'll see in the console window, here it is right here, that our custom policy got the request. It was an HTTP GET request, and here's the URL, and you could look at other things about the request, like headers and query parameters and so on. Then the response came back in from the service, and this is all live, it's actually doing this. We see that we got an HTTP 200 back, for the GET with that same URL.
>>Yeah.
>>Another thing I'd like
to demonstrate, which is very cool if you ask me, is what happens if you have a client object, like this service client, and you call a method on it to create a child client from it. So here I'm going to create a BlobContainerClient for a blob container, and this is the name of the container. When you do this, the new client object that you create actually inherits the same pipeline as the parent. So all the retries, all the custom extensibility, everything, is now on this new client object, the BlobContainerClient.
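In code, that child-client pattern might look like this (a sketch; the URI and container name are illustrative, and `DefaultAzureCredential` stands in for the demo's credentials):

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;

var options = new BlobClientOptions(); // plus any AddPolicy calls, as shown earlier

var service = new BlobServiceClient(
    new Uri("https://myaccount.blob.core.windows.net"), // illustrative URI
    new DefaultAzureCredential(),
    options);

// The child client inherits the parent's pipeline: same retries,
// same custom policies, no extra configuration needed.
BlobContainerClient container = service.GetBlobContainerClient("mycontainer");

// Every call on the child still flows through the inherited pipeline.
await container.DeleteAsync();
```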
To demonstrate that, I'll go to the container and tell it I wish to delete it, and when I hit F10 on that, if you come back over here, you'll see that my custom policy did get invoked: you can see the delete operation came in, and we got a 202 back from the service.
>>That's interesting. So
for the child clients, can I have a customized pipeline for a child as well, or does it just use the parent's?
>>You can, but then you have to create the child client out of thin air.
>>Okay.
>>Don't create it from a parent.
>>Got it, got it.
>>Whenever you create a client, you always get to pass in the options, and then you're configuring a specific client, a specific pipeline for that client. If you create a client from another client, then it inherits the parent's pipeline.
>>Yeah, that makes sense. Got you. And then you're saying
that all of our SDKs, like you were saying before, behave this way.
>>That's correct.
>>So whether we're talking about storage queues, or something else.
>>Yes.
>>Key Vault, or whatever the case is, the programming pattern is fairly similar.
>>Yes, pretty much. Since this is in Azure.Core, the .NET version of Azure.Core works identically across all of our client libraries for .NET. So we hope that people will start to learn some of these concepts, and then once you've learned them from one SDK, you can apply that learning to all the others. Even if you switch programming languages, the concepts are still there, and they work in a very similar way.
>>Sure.
>>Just idiomatic to that language.
>>This makes me think about comments I hear from a lot
of our customers, which is: how can I mock out Azure services? It feels like this SDK is a good step in that direction.
>>Yes.
>>Because when you think about it, a lot of our services have no local version.
>>That's true.
>>No local version, no local emulators or whatever. But it sounds like now there's a good opportunity for either us, or the community, or some other folks to create some default mocking mechanisms, so that we could work with our things locally, work with them offline, and eventually, when we reconnect to the Internet, or reconnect to the major [inaudible], we could have the ability to just use the SDKs the same way; we don't have to change anything.
>>Yeah, definitely, that's true. Aside from the emulation part, which is complicated, the pipeline makes it easy to, say, redirect to a local IP address rather than the remote one.
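For example, a policy that rewrites the request target could look something like this. The host and port here are illustrative (10000 is the default blob port of the Azurite storage emulator); this is not an SDK-provided policy.

```csharp
using Azure.Core;
using Azure.Core.Pipeline;

// Illustrative policy: send every request to a local endpoint
// (e.g. a storage emulator) instead of the real service.
public class LocalRedirectPolicy : HttpPipelineSynchronousPolicy
{
    public override void OnSendingRequest(HttpMessage message)
    {
        // Rewrite the target of the outgoing request in place.
        message.Request.Uri.Scheme = "http";
        message.Request.Uri.Host = "127.0.0.1";
        message.Request.Uri.Port = 10000;
    }
}
```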
question I've been dying to ask.
>>Go ahead.
>>I'm looking at this HTTP pipeline, and it's reminiscent of so many other things that I've seen before. We already have HttpClient, which has delegating handlers and all these types of things. Why didn't we just use that? Why do we need a new pipeline mechanism?
>>I would say the main reason is that delegating handlers are attached to an HttpClient, so if we wanted to modify the options, we would have to create a different HttpClient. The best practice for .NET is that there's one HttpClient for your entire application, so that you're sharing the connection pool it has. That's why we did not use delegating handlers underneath the covers to do this; we built this on top of that.
>>Got it.
>>It's really more efficient; we're using resources more efficiently.
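That design shows up in the client options: every client can share a single HttpClient through the transport. A sketch, assuming the Azure.Core and Azure.Storage.Blobs packages:

```csharp
using System.Net.Http;
using Azure.Core.Pipeline;
using Azure.Storage.Blobs;

// One HttpClient for the whole application, per .NET best practice.
var sharedHttpClient = new HttpClient();

// Every client configured this way shares that connection pool.
var options = new BlobClientOptions
{
    Transport = new HttpClientTransport(sharedHttpClient)
};
```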
>>So you go ahead and manage that lifetime for us, and that's just one concern that I, as a developer, don't really have to worry about too much.
>>Yes.
>>Cool, that sounds good. All right. So again, just like we always do, I want to make sure that anybody who watches this video can find these code samples, so we'll direct them to the right GitHub repos, or the right blog posts and samples, so that everybody can go ahead and check them out.
>>Yes.
>>Again, these SDKs, these libraries, are available today, so you can go and get them; you can download them for .NET specifically, but if you look into any other languages.
>>Other languages too, yeah. Absolutely.
>>You can do that too.
>>Yes.
>>Awesome. Thank you, Jeff. I really appreciate it.
>>I'm glad to help.
>>Thank you all for watching. I hope you try these libraries, try out the SDKs for Azure, and let us know what you think. Leave a comment down below, and make sure you share and like this video with your friends. Thank you for watching this episode of On.NET. [MUSIC]

