Complete Questions & Answers with Practical Examples
This comprehensive study guide provides detailed answers to common .NET Azure developer interview questions. Each answer includes practical examples, code snippets, and the depth of knowledge expected in technical interviews.
IEnumerable: the query executes in memory (LINQ to Objects); any filtering or projection runs on the client after the data has already been loaded.
IQueryable: the query is built as an expression tree that the LINQ provider (e.g., Entity Framework) translates into SQL, so filtering happens at the data source.
Practical Example:
// IEnumerable - loads ALL products into memory first, then filters
IEnumerable<Product> products = dbContext.Products.ToList();
var expensiveProducts = products.Where(p => p.Price > 100); // Filters in memory
// IQueryable - builds SQL query, only loads filtered results
IQueryable<Product> productsQuery = dbContext.Products;
var expensiveProducts = productsQuery.Where(p => p.Price > 100); // Translates to SQL WHERE clause
// The SQL generated for IQueryable would be:
// SELECT * FROM Products WHERE Price > 100
// Performance difference:
// If you have 1 million products but only 10 are > $100:
// - IEnumerable loads 1 million records, then filters to 10
// - IQueryable loads only 10 records from the database
When to use each:
Use IEnumerable when working with small in-memory collections or when you need to apply complex logic that can't be translated to SQL
Use IQueryable when working with databases to optimize performance by filtering at the source
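When a single query needs both, you can keep the SQL-translatable part on IQueryable and switch to in-memory evaluation with AsEnumerable(); a minimal sketch, where IsComplexMatch stands in for logic EF cannot translate:
// Price filter runs in SQL; the remaining predicate runs in memory on the reduced result set
var results = dbContext.Products
    .Where(p => p.Price > 100)       // translated to SQL WHERE
    .AsEnumerable()                  // switch from IQueryable to IEnumerable here
    .Where(p => IsComplexMatch(p))   // arbitrary C# logic, evaluated client-side
    .ToList();

static bool IsComplexMatch(Product p) => p.Name.Length % 2 == 0; // hypothetical client-side rule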
.NET uses an automatic memory management system with a generational garbage collector. Here's how it works:
Generational Collection: the managed heap is divided into Generation 0 (newly allocated, short-lived objects), Generation 1 (survivors of a Gen 0 collection), and Generation 2 (long-lived objects). Objects of 85,000 bytes or more are allocated on the Large Object Heap (LOH), which is collected together with Gen 2.
The GC Process: when Gen 0 fills up, the runtime pauses managed threads, marks every object still reachable from roots (stacks, statics, GC handles), reclaims the unreachable ones, compacts the heap, and promotes survivors to the next generation. Higher generations are collected far less often because they are more expensive to scan.
Code Example:
public class GarbageCollectionExample
{
public void DemonstrateGC()
{
// Objects created here are in Generation 0
var shortLived = new byte[1000];
// Force garbage collection (don't do this in production!)
GC.Collect(0); // Collect Generation 0
// If shortLived survives, it moves to Generation 1
var longLived = new byte[1000000];
// Check object generation
Console.WriteLine($"Generation: {GC.GetGeneration(longLived)}");
// Large objects (>85KB) go directly to Large Object Heap (LOH)
var largeObject = new byte[90000];
// LOH is collected with Generation 2
}
// Best practices for GC optimization
public void OptimizeForGC()
{
// 1. Use object pooling for frequently created objects
var stringBuilder = StringBuilderPool.Rent();
try
{
// Use the string builder
}
finally
{
StringBuilderPool.Return(stringBuilder);
}
// 2. Implement IDisposable for unmanaged resources
using (var stream = new FileStream("file.txt", FileMode.Open))
{
// Stream is disposed automatically
}
// 3. Avoid finalizers unless necessary
// 4. Use ValueTypes (struct) for small, immutable data
// 5. Minimize allocations in hot paths
}
}
Implement the IDisposable pattern for deterministic cleanup of unmanaged resources.

Azure offers multiple caching solutions for different scenarios:
1. Azure Cache for Redis
public class RedisCacheService
{
private readonly IConnectionMultiplexer _redis;
private readonly IDatabase _cache;
public RedisCacheService(string connectionString)
{
_redis = ConnectionMultiplexer.Connect(connectionString);
_cache = _redis.GetDatabase();
}
public async Task<T> GetAsync<T>(string key, Func<Task<T>> factory, TimeSpan expiration)
{
var cached = await _cache.StringGetAsync(key);
if (cached.HasValue)
{
return JsonSerializer.Deserialize<T>(cached);
}
var value = await factory();
await _cache.StringSetAsync(key, JsonSerializer.Serialize(value), expiration);
return value;
}
}
2. In-Memory Caching (IMemoryCache)
public class InMemoryCacheService
{
private readonly IMemoryCache _cache;
public async Task<T> GetOrCreateAsync<T>(string key, Func<Task<T>> factory)
{
return await _cache.GetOrCreateAsync(key, async entry =>
{
entry.SlidingExpiration = TimeSpan.FromMinutes(5);
entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1);
return await factory();
});
}
}
3. Azure CDN (Content Delivery Network): caches static assets (scripts, stylesheets, images, media) at edge locations close to users, driven by standard Cache-Control headers.
4. Azure Front Door Caching: a global entry point combining load balancing, WAF, and edge caching of static and cacheable dynamic responses.
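Both services honor standard HTTP caching semantics, so the application mostly just needs to emit the right Cache-Control headers; a short ASP.NET Core sketch (paths and durations are illustrative):
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Static assets: long-lived, immutable headers that CDN/Front Door edges will cache
app.UseStaticFiles(new StaticFileOptions
{
    OnPrepareResponse = ctx =>
        ctx.Context.Response.Headers["Cache-Control"] = "public,max-age=31536000,immutable"
});

// Dynamic endpoint: a short public max-age lets Front Door serve it from the edge
app.MapGet("/api/catalog", (HttpContext http) =>
{
    http.Response.Headers["Cache-Control"] = "public,max-age=60";
    return Results.Ok(new { generatedAt = DateTime.UtcNow });
});

app.Run();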
5. Application-Level Response Caching
[ApiController]
public class ProductController : ControllerBase
{
[HttpGet]
[ResponseCache(Duration = 300, VaryByQueryKeys = new[] { "category" })]
public async Task<IActionResult> GetProducts([FromQuery] string category)
{
// Response cached for 5 minutes per category
return Ok(await _productService.GetProductsAsync(category));
}
}
Caching Strategy Decision Matrix:

| Cache Type | Use When | Pros | Cons |
|---|---|---|---|
| In-Memory | Single instance, small data | Fastest | Not distributed |
| Redis | Distributed, complex data | Scalable, persistent | Network latency |
| CDN | Static files, global users | Edge locations | Static content only |
| Front Door | Dynamic content, global | Smart routing | More complex setup |
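For reference, the typical DI wiring for the first two rows uses the built-in caching abstractions; an illustrative snippet (Configuration as in the Startup examples below, and the configuration keys are placeholders):
public void ConfigureServices(IServiceCollection services)
{
    // In-memory cache for a single instance
    services.AddMemoryCache();

    // Distributed cache backed by Azure Cache for Redis
    // (Microsoft.Extensions.Caching.StackExchangeRedis package)
    services.AddStackExchangeRedisCache(options =>
    {
        options.Configuration = Configuration["Redis:ConnectionString"]; // placeholder key
        options.InstanceName = "myapp:";
    });
}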
Distributed transactions in microservices are challenging because traditional ACID transactions don't work across service boundaries. Here are the main patterns:
1. Saga Pattern
A saga is a sequence of local transactions, each with a compensating action that undoes it if a later step fails. Coordination comes in two styles: choreography (services react to each other's events) and orchestration (a central coordinator drives the steps). An orchestration example follows, with a choreography sketch after it.
Orchestration Example:
public class OrderSaga
{
private readonly IServiceBus _serviceBus;
public async Task<bool> ProcessOrderAsync(Order order)
{
var sagaId = Guid.NewGuid();
var compensations = new Stack<Func<Task>>();
try
{
// Step 1: Reserve Inventory
var inventoryReserved = await _inventoryService.ReserveItemsAsync(order.Items, sagaId);
compensations.Push(async () => await _inventoryService.ReleaseReservationAsync(sagaId));
// Step 2: Process Payment
var paymentProcessed = await _paymentService.ChargeAsync(order.PaymentInfo, sagaId);
compensations.Push(async () => await _paymentService.RefundAsync(sagaId));
// Step 3: Create Shipment
var shipmentCreated = await _shippingService.CreateShipmentAsync(order, sagaId);
compensations.Push(async () => await _shippingService.CancelShipmentAsync(sagaId));
// All successful - confirm the transaction
await ConfirmSagaAsync(sagaId);
return true;
}
catch (Exception ex)
{
// Compensate in reverse order
while (compensations.Count > 0)
{
var compensation = compensations.Pop();
await compensation();
}
await _logger.LogSagaFailureAsync(sagaId, ex);
return false;
}
}
}
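For contrast, a minimal choreography-style sketch: there is no central coordinator; each service reacts to the previous service's event and publishes the next one. The IEventBus abstraction, IInventoryService, and the failure event are illustrative:
// Inventory service: reacts to OrderPlaced and emits the next event in the chain
public class InventoryEventHandler
{
    private readonly IEventBus _bus;                 // hypothetical messaging abstraction
    private readonly IInventoryService _inventory;

    public InventoryEventHandler(IEventBus bus, IInventoryService inventory)
    {
        _bus = bus;
        _inventory = inventory;
    }

    public async Task HandleAsync(OrderPlacedEvent evt)
    {
        try
        {
            await _inventory.ReserveItemsAsync(evt.Items, evt.OrderId);
            await _bus.PublishAsync(new InventoryReservedEvent { OrderId = evt.OrderId });
        }
        catch (Exception)
        {
            // Compensating event: the Order service listens for this and cancels the order
            await _bus.PublishAsync(new InventoryReservationFailedEvent { OrderId = evt.OrderId });
        }
    }
}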
2. Event Sourcing with CQRS
public class OrderAggregate
{
private readonly List<IEvent> _events = new();
public void PlaceOrder(OrderDetails details)
{
// Business logic validation
if (!IsValid(details))
throw new InvalidOrderException();
// Raise domain event
RaiseEvent(new OrderPlacedEvent
{
OrderId = Guid.NewGuid(),
CustomerId = details.CustomerId,
Items = details.Items,
Timestamp = DateTime.UtcNow
});
}
private void RaiseEvent(IEvent @event)
{
_events.Add(@event);
Apply(@event);
}
private void Apply(IEvent @event)
{
// Update aggregate state based on event
switch (@event)
{
case OrderPlacedEvent e:
OrderId = e.OrderId;
Status = OrderStatus.Placed;
break;
}
}
}
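The read side of this pattern rebuilds an aggregate by replaying its stored events rather than loading a current-state row; a short sketch of a rehydration method that would sit inside OrderAggregate so it can reuse Apply (the event store is an assumed abstraction):
// Rebuild current state purely from the event stream
public static OrderAggregate LoadFromHistory(IEnumerable<IEvent> history)
{
    var aggregate = new OrderAggregate();
    foreach (var @event in history)
        aggregate.Apply(@event);   // state is derived only from events; no current-state table is read
    return aggregate;
}

// Usage (IEventStore is a hypothetical abstraction over e.g. Cosmos DB or EventStoreDB):
// var history = await eventStore.ReadStreamAsync($"order-{orderId}");
// var order = OrderAggregate.LoadFromHistory(history);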
3. Outbox Pattern
public class OutboxPattern
{
public async Task SaveOrderWithEventsAsync(Order order)
{
using var transaction = await _dbContext.Database.BeginTransactionAsync();
try
{
// Save business data
_dbContext.Orders.Add(order);
// Save events to outbox table in same transaction
var outboxEvent = new OutboxEvent
{
Id = Guid.NewGuid(),
EventType = "OrderCreated",
Payload = JsonSerializer.Serialize(order),
CreatedAt = DateTime.UtcNow,
Processed = false
};
_dbContext.OutboxEvents.Add(outboxEvent);
await _dbContext.SaveChangesAsync();
await transaction.CommitAsync();
}
catch
{
await transaction.RollbackAsync();
throw;
}
}
// Background service publishes events
public async Task ProcessOutboxEventsAsync()
{
var unpublishedEvents = await _dbContext.OutboxEvents
.Where(e => !e.Processed)
.OrderBy(e => e.CreatedAt)
.ToListAsync();
foreach (var @event in unpublishedEvents)
{
await _messageBus.PublishAsync(@event.EventType, @event.Payload);
@event.Processed = true;
@event.ProcessedAt = DateTime.UtcNow;
}
await _dbContext.SaveChangesAsync();
}
}
OAuth 2.0 / OpenID Connect Flow with Azure AD:
1. Configuration in Startup.cs:
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
.AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"));
services.AddAuthorization(options =>
{
options.AddPolicy("RequireAdminRole",
policy => policy.RequireRole("Admin"));
options.AddPolicy("RequireEmployeeScope",
policy => policy.RequireClaim("scope", "employee.read"));
});
services.AddMicrosoftIdentityWebApiAuthentication(Configuration, "AzureAd");
}
}
2. Authentication Flow Steps:
public class AuthenticationFlow
{
// Step 1: User attempts to access protected resource
[Authorize]
public class SecureController : Controller
{
public IActionResult Index()
{
// User is redirected to Azure AD login if not authenticated
return View();
}
}
// Step 2: After Azure AD authentication, tokens are received
public class TokenService
{
private readonly ITokenAcquisition _tokenAcquisition;
public async Task<string> GetAccessTokenAsync()
{
// Get token for calling downstream API
var scopes = new[] { "api://your-api/.default" };
var accessToken = await _tokenAcquisition.GetAccessTokenForUserAsync(scopes);
// Token structure:
// Header.Payload.Signature (JWT)
// Payload contains: sub (user), aud (audience), exp (expiry), roles, etc.
return accessToken;
}
}
// Step 3: Use token to call protected APIs
public class ApiClient
{
private readonly HttpClient _httpClient;
private readonly ITokenAcquisition _tokenAcquisition;
public async Task<T> CallProtectedApiAsync<T>(string endpoint)
{
var token = await _tokenAcquisition.GetAccessTokenForUserAsync(new[] { "api.scope" });
_httpClient.DefaultRequestHeaders.Authorization =
new AuthenticationHeaderValue("Bearer", token);
var response = await _httpClient.GetAsync(endpoint);
response.EnsureSuccessStatusCode();
return await response.Content.ReadFromJsonAsync<T>();
}
}
}
3. Token Validation in API:
[ApiController]
[Authorize]
public class ProtectedApiController : ControllerBase
{
[HttpGet]
[RequiredScope("read:data")]
public async Task<IActionResult> GetData()
{
// Azure AD validates:
// 1. Token signature
// 2. Issuer (Azure AD tenant)
// 3. Audience (your API)
// 4. Expiration
// 5. Scopes/Roles
var userId = User.FindFirst(ClaimTypes.NameIdentifier)?.Value;
var userName = User.Identity.Name;
return Ok(new { userId, userName });
}
}
4. Refresh Token Flow:
public class TokenRefreshService
{
public async Task<AuthenticationResult> RefreshTokenAsync(string refreshToken)
{
var app = ConfidentialClientApplicationBuilder
.Create(clientId)
.WithClientSecret(clientSecret)
.WithAuthority(authority)
.Build();
var accounts = await app.GetAccountsAsync();
try
{
// Silent token acquisition using refresh token
var result = await app.AcquireTokenSilent(scopes, accounts.FirstOrDefault())
.ExecuteAsync();
return result;
}
catch (MsalUiRequiredException)
{
// Refresh token expired, user must re-authenticate
throw new UnauthorizedException("Please sign in again");
}
}
}
Azure Functions and Azure WebJobs are both options for running background and event-driven compute in Azure, but with key differences:
Azure Functions:
public static class FunctionExample
{
// HTTP Triggered Function
[FunctionName("ProcessOrder")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
[Queue("orders")] IAsyncCollector<Order> orderQueue,
[Table("OrdersTable")] IAsyncCollector<OrderEntity> orderTable,
ILogger log)
{
var order = await req.ReadFromJsonAsync<Order>(); // HttpRequest has no Content property; ReadFromJsonAsync is the Microsoft.AspNetCore.Http extension
// Multiple outputs from single function
await orderQueue.AddAsync(order);
await orderTable.AddAsync(new OrderEntity(order));
return new OkObjectResult($"Order {order.Id} processed");
}
// Timer Triggered Function
[FunctionName("CleanupOldData")]
public static async Task Cleanup(
[TimerTrigger("0 0 2 * * *")] TimerInfo timer, // Runs at 2 AM daily
ILogger log)
{
log.LogInformation($"Cleanup executed at: {DateTime.Now}");
// Cleanup logic
}
}
WebJobs:
public class WebJobExample
{
public static void Main()
{
var builder = new HostBuilder();
builder.ConfigureWebJobs(b =>
{
b.AddAzureStorageCoreServices();
b.AddAzureStorage();
});
var host = builder.Build();
using (host)
{
host.Run();
}
}
// Continuous WebJob
public static async Task ProcessQueueMessage(
[QueueTrigger("orders")] Order order,
[Blob("invoices/{rand-guid}.pdf")] Stream invoiceOutput,
ILogger logger)
{
logger.LogInformation($"Processing order {order.Id}");
// Long-running process
var invoice = await GenerateInvoiceAsync(order);
await invoice.CopyToAsync(invoiceOutput);
}
}
Comparison Table:

| Feature | Azure Functions | Azure WebJobs |
|---|---|---|
| Hosting | Standalone or in App Service | Only in App Service |
| Scaling | Auto-scales independently (Consumption/Premium) | Scales with the App Service plan |
| Triggers | HTTP, Timer, Queue, Blob, Event Hub, Service Bus, etc. | Queue, Blob, Timer, manual/continuous |
| Languages | C#, JavaScript, Python, Java, PowerShell | Any script or executable (.NET, Node.js, PowerShell, etc.) |
| Pricing | Consumption, Premium, or Dedicated plan | Included in the App Service plan |
| Development | Portal, Visual Studio, VS Code, CLI | Primarily Visual Studio |
| Use Case | Event-driven, microservices, APIs | Background tasks, continuous processing |
| Max Execution Time | 5 min default / 10 min max (Consumption); effectively unlimited (Premium/Dedicated) | Unlimited |
When to Choose:
Azure Functions: Event-driven scenarios, microservices, APIs, scheduled tasks
WebJobs: Long-running processes, complex scheduling, when already using App Service
Thread safety prevents race conditions when multiple threads access shared resources:
1. Locking Mechanisms:
public class ThreadSafeCounter
{
private int _count;
private readonly object _lockObject = new object();
// Basic lock
public void IncrementWithLock()
{
lock (_lockObject)
{
_count++;
}
}
// Monitor (equivalent to lock)
public void IncrementWithMonitor()
{
Monitor.Enter(_lockObject);
try
{
_count++;
}
finally
{
Monitor.Exit(_lockObject);
}
}
// ReaderWriterLockSlim for read-heavy scenarios
private readonly ReaderWriterLockSlim _rwLock = new();
private List<string> _data = new();
public string ReadData(int index)
{
_rwLock.EnterReadLock();
try
{
return _data[index];
}
finally
{
_rwLock.ExitReadLock();
}
}
public void WriteData(string value)
{
_rwLock.EnterWriteLock();
try
{
_data.Add(value);
}
finally
{
_rwLock.ExitWriteLock();
}
}
}
2. Concurrent Collections:
public class ConcurrentDataStructures
{
// Thread-safe collections
private readonly ConcurrentDictionary<int, string> _cache = new();
private readonly ConcurrentQueue<Task> _taskQueue = new();
private readonly ConcurrentBag<LogEntry> _logs = new();
public async Task ProcessConcurrently()
{
// Thread-safe dictionary operations
_cache.TryAdd(1, "value");
_cache.AddOrUpdate(1, "new", (key, old) => "updated");
// Producer-consumer pattern
var bc = new BlockingCollection<WorkItem>(100);
// Producer
Task.Run(() =>
{
for (int i = 0; i < 10; i++)
{
bc.Add(new WorkItem { Id = i });
}
bc.CompleteAdding();
});
// Consumer
Task.Run(() =>
{
foreach (var item in bc.GetConsumingEnumerable())
{
ProcessWorkItem(item);
}
});
}
}
3. Atomic Operations:
public class AtomicOperations
{
private int _counter;
private long _total;
// Thread-safe increment without locks
public int IncrementAtomic()
{
return Interlocked.Increment(ref _counter);
}
// Thread-safe compare and swap
public bool TryUpdate(int expected, int newValue)
{
return Interlocked.CompareExchange(ref _counter, newValue, expected) == expected;
}
// Thread-safe addition
public void AddToTotal(long value)
{
Interlocked.Add(ref _total, value);
}
}
4. Async/Await Patterns:
public class AsyncThreadSafety
{
private readonly SemaphoreSlim _semaphore = new(1, 1);
// Async-safe locking
public async Task<string> GetDataAsync()
{
await _semaphore.WaitAsync();
try
{
// Critical section
return await FetchDataAsync();
}
finally
{
_semaphore.Release();
}
}
// Async local storage
private static readonly AsyncLocal<string> _asyncLocalValue = new();
public async Task ProcessWithContext()
{
_asyncLocalValue.Value = "ContextData";
await Task.Run(() =>
{
// Value is preserved across async boundaries
Console.WriteLine(_asyncLocalValue.Value);
});
}
}
5. Immutable Data Structures:
public class ImmutableApproach
{
// Immutable class - inherently thread-safe
public class ImmutablePerson
{
public string Name { get; }
public int Age { get; }
public ImmutablePerson(string name, int age)
{
Name = name;
Age = age;
}
// Return new instance instead of modifying
public ImmutablePerson WithAge(int newAge)
{
return new ImmutablePerson(Name, newAge);
}
}
// Using System.Collections.Immutable
private ImmutableList<string> _list = ImmutableList<string>.Empty;
public void AddItem(string item)
{
// Thread-safe: creates new list
_list = _list.Add(item);
}
}
Also worth knowing: the volatile keyword for simple flag scenarios and ThreadLocal<T> for thread-specific data.
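A minimal sketch of both, assuming a simple background worker (the stop flag and the per-thread Random are illustrative):
public class MiscPrimitives
{
    // volatile prevents the JIT/CPU from caching the read, so writes from other threads are observed promptly
    private volatile bool _stopRequested;

    // ThreadLocal<T> gives each thread its own independent instance (Random is not thread-safe)
    private static readonly ThreadLocal<Random> _random =
        new(() => new Random(Environment.CurrentManagedThreadId));

    public void Worker()
    {
        while (!_stopRequested)                 // sees Stop() from another thread without a lock
        {
            Thread.Sleep(_random.Value.Next(10, 50));
        }
    }

    public void Stop() => _stopRequested = true;
}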
How Async/Await Works:
public class AsyncAwaitExplained
{
// Async/await transforms this method into a state machine
public async Task<string> GetDataAsync()
{
// State 0: Before first await
var client = new HttpClient();
// State 1: Awaiting HTTP call
// Thread is released back to thread pool here
var response = await client.GetAsync("https://api.example.com/data");
// State 2: After HTTP call, possibly on different thread
var content = await response.Content.ReadAsStringAsync();
// State 3: Return result
return content;
}
}
When TO Use Async/Await:
public class GoodAsyncUsage
{
// I/O Bound Operations
public async Task<User> GetUserAsync(int id)
{
using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();
var command = new SqlCommand("SELECT * FROM Users WHERE Id = @id", connection);
command.Parameters.AddWithValue("@id", id);
using var reader = await command.ExecuteReaderAsync();
if (await reader.ReadAsync())
{
return MapToUser(reader);
}
return null;
}
// Multiple concurrent operations
public async Task<Dashboard> GetDashboardAsync()
{
var userTask = GetUserAsync();
var ordersTask = GetOrdersAsync();
var statsTask = GetStatisticsAsync();
await Task.WhenAll(userTask, ordersTask, statsTask);
return new Dashboard
{
User = userTask.Result,
Orders = ordersTask.Result,
Statistics = statsTask.Result
};
}
}
When NOT to Use Async/Await:
1. CPU-Bound Operations:
// ❌ BAD: No actual async operation
public async Task<int> CalculatePrimeAsync(int n)
{
return await Task.Run(() =>
{
// CPU-intensive calculation
// This just moves work to another thread, doesn't save resources
return CalculatePrime(n);
});
}
// ✅ GOOD: Synchronous for CPU-bound
public int CalculatePrime(int n)
{
// Direct calculation
return CalculatePrime(n);
}
2. Simple, Fast Operations:
// ❌ BAD: Async overhead for simple operation
private readonly Dictionary<int, string> _cache = new();
public async Task<string> GetFromCacheAsync(int key)
{
return await Task.FromResult(_cache[key]);
}
// ✅ GOOD: Synchronous for fast operations
public string GetFromCache(int key)
{
return _cache[key];
}
3. Constructor or Property:
// ❌ BAD: Can't make constructor async
public MyClass()
{
// This blocks and can cause deadlocks
InitializeAsync().GetAwaiter().GetResult();
}
// ✅ GOOD: Factory pattern for async initialization
private MyClass() { }
public static async Task<MyClass> CreateAsync()
{
var instance = new MyClass();
await instance.InitializeAsync();
return instance;
}
4. Avoiding Async All the Way:
// ❌ BAD: Mixing sync and async
public void ProcessData()
{
var data = GetDataAsync().Result; // Can cause deadlock!
Process(data);
}
// ✅ GOOD: Async all the way up
public async Task ProcessDataAsync()
{
var data = await GetDataAsync();
Process(data);
}
5. Short-lived, High-frequency Operations:
// ❌ BAD: Async overhead for frequent calls
public async Task LogAsync(string message)
{
await Task.Run(() => Console.WriteLine(message));
}
// ✅ GOOD: Use synchronous or batch
private readonly Channel<string> _logChannel = Channel.CreateUnbounded<string>();
public void Log(string message)
{
_logChannel.Writer.TryWrite(message);
}
// Background service processes batches
public async Task ProcessLogsAsync()
{
await foreach (var message in _logChannel.Reader.ReadAllAsync())
{
Console.WriteLine(message);
}
}
Common Pitfalls:
public class AsyncPitfalls
{
// ❌ Async void (except event handlers)
public async void BadMethod()
{
await Task.Delay(1000);
// Exceptions here crash the application
}
// ✅ Return Task
public async Task GoodMethod()
{
await Task.Delay(1000);
}
// ❌ Unnecessary async/await
public async Task<string> UnnecessaryAsync()
{
return await GetStringAsync();
}
// ✅ Direct return
public Task<string> DirectReturn()
{
return GetStringAsync();
}
// ⚠️ ConfigureAwait for libraries
public async Task LibraryMethod()
{
await SomeOperationAsync().ConfigureAwait(false);
// Don't capture SynchronizationContext in libraries
}
}
Step-by-Step Migration Strategy:
Phase 1: Assessment & Planning
public class ApplicationAssessment
{
public class MigrationReadiness
{
public Dictionary<string, ComponentAnalysis> Components { get; set; }
public class ComponentAnalysis
{
public string Name { get; set; }
public List<string> Dependencies { get; set; }
public int CouplingScore { get; set; } // 1-10
public bool IsStateless { get; set; }
public string DataStore { get; set; }
public List<string> ExternalIntegrations { get; set; }
}
public MigrationStrategy DetermineStrategy()
{
// Analyze components for:
// 1. Tight coupling (database, shared state)
// 2. Business boundaries
// 3. Scalability requirements
// 4. Technology constraints
return new MigrationStrategy
{
Pattern = "StranglerFig", // or "BigBang", "BranchByAbstraction"
Priority = "CustomerFacing", // Start with edge services
Timeline = "6-12 months"
};
}
}
}
Phase 2: Strangler Fig Pattern Implementation
// Step 1: Add API Gateway
public class ApiGatewayConfiguration
{
public void Configure(IApplicationBuilder app)
{
app.Use(async (context, next) => // simplified inline routing; a real gateway (Ocelot, YARP, API Management) is configured declaratively
{
// Route to monolith or new microservices
var path = context.Request.Path.Value;
if (path.StartsWith("/api/users"))
{
// Route to new User microservice
context.Request.Path = "/users" + path.Substring(10);
await next.Invoke();
}
else
{
// Route to monolith
context.Request.Host = new HostString("monolith.app");
await next.Invoke();
}
});
}
}
// Step 2: Extract first microservice
public class UserServiceExtraction
{
// Original monolith code
public class MonolithUserService
{
private readonly AppDbContext _context;
public User GetUser(int id)
{
return _context.Users
.Include(u => u.Orders) // Tight coupling
.FirstOrDefault(u => u.Id == id);
}
}
// New microservice
public class UserMicroservice
{
private readonly UserDbContext _userContext;
private readonly IOrderServiceClient _orderService;
public async Task<UserDto> GetUserAsync(int id)
{
var user = await _userContext.Users.FindAsync(id);
// Call Order service for order data
var orders = await _orderService.GetUserOrdersAsync(id);
return new UserDto
{
Id = user.Id,
Name = user.Name,
Orders = orders
};
}
}
}
Phase 3: Data Migration Strategy
public class DataMigrationStrategy
{
// Option 1: Shared Database Anti-pattern (temporary)
public class SharedDatabaseApproach
{
// Both monolith and microservice access same DB
// Use for gradual migration
public void ConfigureServices(IServiceCollection services)
{
services.AddDbContext<SharedDbContext>(options =>
options.UseSqlServer(sharedConnectionString));
}
}
// Option 2: Database per Service with Sync
public class DatabasePerService
{
public async Task MigrateDataAsync()
{
// 1. Dual writes during transition
await WriteToMonolithDb(data);
await WriteToMicroserviceDb(data);
// 2. Sync historical data
var batchSize = 1000;
var offset = 0;
while (true)
{
var batch = await GetMonolithData(offset, batchSize);
if (!batch.Any()) break;
await WriteBatchToNewDb(batch);
offset += batchSize;
// Track progress
await UpdateMigrationProgress(offset);
}
}
}
}
Phase 4: Azure Service Selection
Typical choices: AKS or Azure Container Apps to host the extracted services, Azure API Management or Front Door as the gateway, Service Bus/Event Grid for asynchronous messaging, Azure SQL or Cosmos DB as the per-service data store, Key Vault for secrets, and Azure Monitor/Application Insights for observability.
Phase 5: Deployment Pipeline
// Azure DevOps Pipeline (YAML)
public class DeploymentConfiguration
{
public string GeneratePipeline()
{
return @"
trigger:
- main

stages:
- stage: Build
  jobs:
  - job: BuildMicroservices
    steps:
    - task: Docker@2
      inputs:
        command: 'buildAndPush'
        repository: '$(containerRegistry)/user-service'
        dockerfile: 'src/UserService/Dockerfile'

- stage: DeployDev
  jobs:
  - deployment: DeployToAKS
    environment: 'dev'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: Kubernetes@1
            inputs:
              command: 'apply'
              useConfigurationFile: true
              configuration: 'k8s/deployment.yaml'

- stage: DeployProd
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
  jobs:
  - deployment: BlueGreenDeployment
    environment: 'production'
    # Azure Pipelines deployment strategies are runOnce, rolling and canary;
    # blue-green is implemented inside the deploy steps (e.g. switching the
    # Kubernetes service selector or an App Service slot), not as a strategy name.
    strategy:
      runOnce:
        deploy:
          steps:
          - task: Kubernetes@1
            inputs:
              command: 'apply'
              useConfigurationFile: true
              configuration: 'k8s/deployment.yaml'";
}
}
Architecture Overview:
public class HighAvailabilityArchitecture
{
public class AzureComponents
{
// Multi-region deployment
public List<string> Regions = new() { "East US", "West Europe" };
// Traffic Management
public string TrafficManager = "Azure Front Door"; // Global load balancing
// Compute Layer
public ComputeConfiguration Compute = new()
{
Primary = "Azure Kubernetes Service (AKS)",
AutoScaling = true,
MinReplicas = 3,
MaxReplicas = 100,
AvailabilityZones = new[] { "1", "2", "3" }
};
// Data Layer
public DataConfiguration Data = new()
{
Database = "Cosmos DB with multi-region writes",
Cache = "Azure Cache for Redis Premium",
Storage = "Geo-redundant Storage (GRS)",
CDN = "Azure CDN for static content"
};
}
}
Implementation Details:
1. Frontend Layer (High Availability)
// Azure Front Door configuration
public class FrontDoorConfiguration
{
public void ConfigureGlobalLoadBalancing()
{
var frontDoor = new FrontDoorConfig
{
BackendPools = new[]
{
new BackendPool
{
Name = "primary-pool",
Backends = new[]
{
new Backend { Address = "eastus.app.com", Priority = 1, Weight = 50 },
new Backend { Address = "westeu.app.com", Priority = 1, Weight = 50 }
},
HealthProbe = new HealthProbe
{
Path = "/health",
Interval = TimeSpan.FromSeconds(30),
Timeout = TimeSpan.FromSeconds(10)
}
}
},
RoutingRules = new[]
{
new RoutingRule
{
Pattern = "/*",
BackendPool = "primary-pool",
CacheEnabled = true,
CacheDuration = TimeSpan.FromMinutes(5)
}
}
};
}
}
2. API Layer (Microservices)
// Product Service with resilience
public class ProductService
{
private readonly IHttpClientFactory _httpClientFactory;
private readonly IMemoryCache _cache;
public async Task<Product> GetProductAsync(int id)
{
// Multi-layer caching
if (_cache.TryGetValue($"product_{id}", out Product cached))
{
return cached;
}
// Circuit breaker pattern
var httpClient = _httpClientFactory.CreateClient("ProductApi");
var policy = Policy
.Handle<HttpRequestException>()
.OrResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
.WaitAndRetryAsync(
3,
retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)),
onRetry: (outcome, timespan, retryCount, context) =>
{
_logger.LogWarning($"Retry {retryCount} after {timespan}");
});
var response = await policy.ExecuteAsync(async () =>
await httpClient.GetAsync($"/api/products/{id}"));
if (response.IsSuccessStatusCode)
{
var product = await response.Content.ReadFromJsonAsync<Product>();
_cache.Set($"product_{id}", product, TimeSpan.FromMinutes(5));
return product;
}
// Fallback to Cosmos DB read
return await GetFromDatabaseAsync(id);
}
}
3. Data Layer (Multi-Region Cosmos DB)
public class CosmosDbConfiguration
{
public void ConfigureHighAvailability()
{
var cosmosClient = new CosmosClientBuilder(connectionString)
.WithApplicationRegion(Regions.EastUS) // Preferred region
.WithApplicationPreferredRegions(new[]
{
Regions.EastUS,
Regions.WestEurope,
Regions.SoutheastAsia
})
.WithConnectionModeDirect() // Better performance
.WithBulkExecution(true) // Bulk operations
.WithConsistencyLevel(ConsistencyLevel.Session) // Balance consistency/performance
.Build();
}
public async Task<Order> CreateOrderWithSagaAsync(Order order)
{
var container = cosmosClient.GetContainer("ecommerce", "orders");
// Implement idempotency
order.Id = order.IdempotencyKey ?? Guid.NewGuid().ToString();
try
{
// Transactional batch for consistency
var batch = container.CreateTransactionalBatch(
new PartitionKey(order.CustomerId))
.CreateItem(order)
.CreateItem(new Inventory { ProductId = order.ProductId, Quantity = -1 });
var response = await batch.ExecuteAsync();
if (response.IsSuccessStatusCode)
{
// Publish to Service Bus for order processing
await _serviceBus.SendMessageAsync(new ServiceBusMessage(
JsonSerializer.Serialize(order))
{
Subject = "OrderCreated",
SessionId = order.CustomerId // For FIFO processing
});
}
return order;
}
catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.Conflict)
{
// Idempotent - order already exists
return await GetOrderAsync(order.Id);
}
}
}
4. Messaging Layer (Service Bus)
public class OrderProcessingService
{
public void ConfigureServiceBus()
{
var client = new ServiceBusClient(connectionString, new ServiceBusClientOptions
{
TransportType = ServiceBusTransportType.AmqpWebSockets,
RetryOptions = new ServiceBusRetryOptions
{
MaxRetries = 3,
Delay = TimeSpan.FromSeconds(1),
MaxDelay = TimeSpan.FromSeconds(10),
Mode = ServiceBusRetryMode.Exponential
}
});
// Configure processor with session for ordered processing
var processor = client.CreateSessionProcessor("orders", new ServiceBusSessionProcessorOptions
{
MaxConcurrentSessions = 100,
MaxConcurrentCallsPerSession = 1, // Ensure order
SessionIdleTimeout = TimeSpan.FromMinutes(5)
});
}
}
5. Monitoring & Observability
public class ObservabilityConfiguration
{
public void ConfigureApplicationInsights(IServiceCollection services)
{
services.AddApplicationInsightsTelemetry();
services.AddSingleton<ITelemetryInitializer, CustomTelemetryInitializer>();
// Custom metrics
services.Configure<TelemetryConfiguration>((o) =>
{
o.TelemetryProcessorChainBuilder
.Use((next) => new AdaptiveSamplingTelemetryProcessor(next))
.Build();
});
}
public class CustomTelemetryInitializer : ITelemetryInitializer
{
private readonly IHttpContextAccessor _httpContextAccessor;
public CustomTelemetryInitializer(IHttpContextAccessor httpContextAccessor)
{
_httpContextAccessor = httpContextAccessor;
}
public void Initialize(ITelemetry telemetry)
{
telemetry.Context.GlobalProperties["Environment"] = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT");
telemetry.Context.GlobalProperties["Region"] = Environment.GetEnvironmentVariable("AZURE_REGION");
if (telemetry is RequestTelemetry request)
{
// Add custom dimensions (HttpContext.Current does not exist in ASP.NET Core; inject IHttpContextAccessor instead)
request.Properties["UserId"] = _httpContextAccessor.HttpContext?.User?.Identity?.Name;
}
}
}
}
Disaster Recovery Strategy:
Run active-active (or active-passive) across paired regions behind Front Door, let health probes drive automatic regional failover, enable Cosmos DB service-managed failover and geo-redundant (GRS) storage, keep infrastructure as code so a region can be rebuilt quickly, define explicit RTO/RPO targets, and rehearse failover regularly.
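A minimal sketch of the health endpoint such probes would call, using ASP.NET Core's built-in health checks (the AppDbContext registration is assumed elsewhere):
var builder = WebApplication.CreateBuilder(args);

// Report unhealthy when the regional database is unreachable so Front Door shifts traffic to another region
builder.Services.AddHealthChecks()
    .AddDbContextCheck<AppDbContext>();   // Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore

var app = builder.Build();
app.MapHealthChecks("/health");           // the probe path configured in Front Door
app.Run();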
Common LINQ Performance Issues and Solutions:
public class LinqOptimization
{
private readonly AppDbContext _context;
// ❌ BAD: N+1 Query Problem
public void BadExample()
{
var orders = _context.Orders.ToList();
foreach (var order in orders)
{
// Each iteration triggers a database query
Console.WriteLine(order.Customer.Name);
}
}
// ✅ GOOD: Eager Loading
public void GoodExample()
{
var orders = _context.Orders
.Include(o => o.Customer)
.ToList();
}
// ❌ BAD: Loading unnecessary data
public List<string> GetProductNames()
{
return _context.Products
.ToList() // Loads entire product objects
.Select(p => p.Name)
.ToList();
}
// ✅ GOOD: Project only needed fields
public List<string> GetProductNamesOptimized()
{
return _context.Products
.Select(p => p.Name) // SQL: SELECT Name FROM Products
.ToList();
}
// ❌ BAD: Multiple database round trips
public async Task<DashboardData> GetDashboardDataSlow()
{
var totalOrders = await _context.Orders.CountAsync();
var pendingOrders = await _context.Orders.Where(o => o.Status == "Pending").CountAsync();
var totalRevenue = await _context.Orders.SumAsync(o => o.Total);
return new DashboardData { TotalOrders = totalOrders, PendingOrders = pendingOrders, Revenue = totalRevenue };
}
// ✅ GOOD: Single query with aggregation
public async Task<DashboardData> GetDashboardDataOptimized()
{
return await _context.Orders
.GroupBy(o => 1) // Group all into single group
.Select(g => new DashboardData
{
TotalOrders = g.Count(),
PendingOrders = g.Count(o => o.Status == "Pending"),
Revenue = g.Sum(o => o.Total)
})
.FirstOrDefaultAsync();
}
// Compiled Queries for frequently used queries
private static readonly Func<AppDbContext, int, Task<Product>> GetProductById =
EF.CompileAsyncQuery((AppDbContext context, int id) =>
context.Products.FirstOrDefault(p => p.Id == id));
public Task<Product> GetProductByIdFast(int id)
{
return GetProductById(_context, id);
}
// AsNoTracking for read-only operations
public async Task<List<ProductDto>> GetProductsReadOnly()
{
return await _context.Products
.AsNoTracking() // Don't track entities for change detection
.Where(p => p.IsActive)
.Select(p => new ProductDto { Id = p.Id, Name = p.Name })
.ToListAsync();
}
}
A complete real-time notification implementation using SignalR and Azure services:
1. SignalR Hub
public class NotificationHub : Hub
{
private readonly IConnectionManager _connectionManager;
public override async Task OnConnectedAsync()
{
var userId = Context.User.Identity.Name;
await _connectionManager.AddConnectionAsync(userId, Context.ConnectionId);
await Groups.AddToGroupAsync(Context.ConnectionId, $"user-{userId}");
await base.OnConnectedAsync();
}
public async Task SubscribeToCategory(string category)
{
await Groups.AddToGroupAsync(Context.ConnectionId, $"category-{category}");
}
}
2. Service Bus Message Handler
public class NotificationProcessor : IHostedService
{
private readonly ServiceBusProcessor _processor;
private readonly IHubContext<NotificationHub> _hubContext;
public async Task StartAsync(CancellationToken cancellationToken)
{
_processor.ProcessMessageAsync += HandleMessageAsync;
_processor.ProcessErrorAsync += HandleErrorAsync;
await _processor.StartProcessingAsync(cancellationToken);
}
private async Task HandleMessageAsync(ProcessMessageEventArgs args)
{
var notification = JsonSerializer.Deserialize<Notification>(args.Message.Body);
switch (notification.Type)
{
case NotificationType.User:
await _hubContext.Clients.Group($"user-{notification.UserId}")
.SendAsync("ReceiveNotification", notification);
break;
case NotificationType.Broadcast:
await _hubContext.Clients.All
.SendAsync("ReceiveNotification", notification);
break;
case NotificationType.Category:
await _hubContext.Clients.Group($"category-{notification.Category}")
.SendAsync("ReceiveNotification", notification);
break;
}
await args.CompleteMessageAsync(args.Message);
}
}
3. Notification Service
public class NotificationService
{
private readonly ServiceBusSender _sender;
private readonly INotificationRepository _repository;
public async Task SendNotificationAsync(Notification notification)
{
// Save to database for persistence
await _repository.SaveAsync(notification);
// Send to Service Bus for real-time delivery
var message = new ServiceBusMessage(JsonSerializer.Serialize(notification))
{
Subject = notification.Type.ToString(),
ContentType = "application/json",
TimeToLive = TimeSpan.FromHours(1)
};
await _sender.SendMessageAsync(message);
}
// Batch notifications for efficiency
public async Task SendBatchNotificationsAsync(List<Notification> notifications)
{
var messages = notifications.Select(n => new ServiceBusMessage(
JsonSerializer.Serialize(n))
{
Subject = n.Type.ToString()
}).ToList();
await _sender.SendMessagesAsync(messages);
}
}
4. Push Notifications (Mobile)
public class PushNotificationService
{
private readonly NotificationHubClient _hubClient;
public async Task SendPushNotificationAsync(string userId, string message)
{
var notification = new Dictionary<string, string>
{
{ "message", message },
{ "badge", "1" }
};
// Send to specific user's devices
await _hubClient.SendTemplateNotificationAsync(notification, $"userId:{userId}");
}
}
5. Client-side (JavaScript/TypeScript)
const connection = new signalR.HubConnectionBuilder()
.withUrl('/notificationHub')
.withAutomaticReconnect([0, 2000, 10000, 30000])
.build();
connection.on('ReceiveNotification', async (notification) => { // async so the IndexedDB write below can be awaited
// Handle notification
showToast(notification);
updateNotificationBadge();
// Store in IndexedDB for offline access
await db.notifications.add(notification);
});
// Reconnection handling
connection.onreconnecting(error => {
console.log('Reconnecting...', error);
showConnectionStatus('reconnecting');
});
connection.onreconnected(connectionId => {
console.log('Reconnected', connectionId);
showConnectionStatus('connected');
// Re-subscribe to categories
subscribeToCategories();
});
await connection.start();
6. Scaling Configuration
public class ScalingConfiguration
{
public void ConfigureServices(IServiceCollection services)
{
// Use Azure SignalR Service for scaling
services.AddSignalR()
.AddAzureSignalR(options =>
{
options.ConnectionString = Configuration["Azure:SignalR:ConnectionString"];
options.ServerStickyMode = ServerStickyMode.Required;
});
// Redis backplane for multiple servers
services.AddStackExchangeRedisCache(options =>
{
options.Configuration = Configuration["Redis:ConnectionString"];
});
}
}
This study guide covers the essential topics for .NET Azure developer interviews. Remember:
Focus on understanding the "why" behind each solution; interviewers value a problem-solving approach over memorized answers. Good luck with your interview preparation!