Sunday, 7 March 2021

DevNet Specialized Partners Gain API Insights for Pandemic Challenges


The network connects everything. It stands at the nexus of IT and business, with the potential to constantly empower, protect, and inform every IT and business process. As users, devices, and distributed applications have grown in number, the networking environment has become exponentially more complex.

Intent-based networking transforms a hardware-centric, manual network into a controller-led network that captures business intent and translates it into policies that can be automated and applied consistently across the network. The goal is for the network to continuously monitor and adjust network performance to help assure desired business outcomes.


Cisco DNA Center is the network management and command center for Cisco Digital Network Architecture (DNA), and it is at the heart of Cisco’s intent-based network. It enables you to:

◉ Configure and provision thousands of network devices across your enterprise in minutes, not hours.

◉ Deploy group-based secure access and network segmentation customized for your business needs.

◉ Monitor, identify, and react in real time to changing network and wireless conditions.

◉ Enhance the overall network experience by optimizing end-to-end IT processes, reducing total cost of ownership, and creating a value-added network.
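These capabilities are exposed programmatically through DNA Center's northbound REST APIs. As a minimal sketch of what that looks like (the endpoint paths follow DNA Center's published API, while the host, username, and password are placeholders for your own deployment), here is how a script might authenticate and pull the device inventory:

```python
import base64
import json
import ssl
import urllib.request

def auth_url(host):
    """DNA Center's token endpoint (POST with HTTP basic auth)."""
    return f"https://{host}/dna/system/api/v1/auth/token"

def device_url(host):
    """DNA Center's network-device inventory endpoint."""
    return f"https://{host}/dna/intent/api/v1/network-device"

def _insecure_ctx():
    # Lab-only convenience: skip TLS certificate verification.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx

def get_token(host, user, password):
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        auth_url(host), method="POST",
        headers={"Authorization": f"Basic {creds}"})
    with urllib.request.urlopen(req, context=_insecure_ctx()) as resp:
        return json.load(resp)["Token"]

def list_devices(host, token):
    req = urllib.request.Request(
        device_url(host), headers={"X-Auth-Token": token})
    with urllib.request.urlopen(req, context=_insecure_ctx()) as resp:
        return json.load(resp).get("response", [])
```

A monitoring script could loop over `list_devices(...)` and report fields such as each device's hostname and software version, which is exactly the kind of inventory data the use cases below build on.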

While our networks are evolving and becoming more complex, the world around us has also become even more complex. As nations continue to struggle with the challenges brought on by the COVID-19 pandemic, the workforce is likely eager to return to the office and at the very least, return to some level of normalcy.

Ensuring a safe return to the workplace

Many essential businesses have remained open during these trying times, and as restrictions begin to ease up, many will begin re-opening their offices soon and offering their employees the opportunity to conduct business in-person once again. As the global workforce emerges from their office exile, it is critically important to ensure strong safety measures are in place to minimize the risk and reduce the likelihood of a viral outbreak in the office.


The unique circumstances that businesses face today call for a new breed of innovation to help get through these modern challenges. Developer APIs built on top of Cisco’s technology can help facilitate that innovation. While the business problems solved through APIs are commonly associated with digital experiences, a mind shift is brewing that begs the question: how can these APIs provide better physical experiences in this pandemic world?

Much like building a network security policy, companies need visibility, consistency, scalability, and adaptability across their infrastructure to reduce the attack surface and minimize the potential risks from threats (whether the threat is digital, physical, or biological). Programmability and APIs are the tools that arm developers and engineers with the visibility, consistency, scalability, and adaptability they need to help their businesses transform and prepare for a safe return to office life.

Rich APIs can facilitate this new breed of innovation

While these rich APIs can facilitate this new breed of innovation, it is up to our partners to deliver on it. Cisco’s partners are uniquely positioned to “be the bridge” that gets the world back to the office safely and delivers these new innovations to their customers.

When it comes to knowing which partners to trust to deliver on that innovation, Cisco customers need look no further than our growing list of DevNet Specialized Partners.

The DevNet Specialization recognizes Cisco partners with demonstrated software skills and business practices that leverage API capabilities of Cisco products and services to deliver successful outcomes to their customers. A DevNet Specialized partner is one that is fully equipped to build and deliver on the innovations needed to make the return to the office safe and effective. 

While there are many benefits to the DevNet Specialization program, such as Ecosystem Exchange placement to co-market with Cisco and pre-sales consulting opportunities, one of the unique benefits we offer to our specialized partners is the exclusive API Insights webinar series, which provides the latest information on Cisco API releases, industry trends, best practices, and technical deep dives on API-related topics. 

The most recent API Insights webinar – offered exclusively to our DevNet Specialized partners – focused on how DNA Center can be leveraged to perform contact-tracing inside of an office building. It began with an overview of the API capabilities provided by DNA Center, making sure that our DevNet Specialized partners had a base understanding needed to advance the conversation.


Cisco DNA Center “Pandemic Proximity” use case

From there, it dove into the “Pandemic Proximity” use case for DNA Center, covering all the implementation details needed to build and deliver this use case to customers. Partners were provided with a deep dive into the technical aspects of this use case and how Cisco technology can better track in-person interactions across the office, as far back as 14 days. 
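While the webinar’s implementation details are exclusive to partners, the core proximity idea can be sketched in a few lines: given Wi-Fi association records (client, access point, time interval), two people “met” if their intervals overlap on the same access point within the look-back window. The record format below is hypothetical and purely illustrative:

```python
from datetime import datetime, timedelta

def contacts(records, client, now, window_days=14):
    """records: (client_id, ap_name, start, end) association tuples.

    Returns the set of other clients that shared an AP with `client`
    at an overlapping time within the last `window_days` days.
    """
    cutoff = now - timedelta(days=window_days)
    mine = [r for r in records if r[0] == client and r[3] >= cutoff]
    found = set()
    for _, ap, start, end in mine:
        for other, o_ap, o_start, o_end in records:
            # same AP, different person, overlapping time intervals
            if other != client and o_ap == ap:
                if o_start < end and start < o_end:
                    found.add(other)
    return found
```

In a real deployment, the association records would come from DNA Center’s client and location data rather than an in-memory list, but the overlap logic is the essence of contact tracing.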

If a business ever faces the unfortunate challenge of dealing with a viral outbreak in their office, they can use these capabilities, delivered by a DevNet Specialized partner, to understand where the contagious employee was in the office, and who they might have come into contact with. This can help reduce the impact caused by an outbreak, keep more employees safe, and help reduce the disruption to business that this may cause. It also has the potential to save lives in the process. 

While more information will be available in the future about the “Pandemic Proximity” use case for DNA Center, Cisco’s partners in the DevNet Specialization program are uniquely positioned to deliver on this use case today, having gained the necessary insights and knowledge from Cisco experts through the API Insights webinar. Although this quarter’s API Insights event has already concluded, I am already looking forward to what the API experts have in-store for next quarter, and how, together, we can all better the world through programmability and APIs. 

As a reminder, these API Insights webinars are available exclusively to partners that have already achieved their DevNet Specialization. I invite you to learn more about the DevNet Specialization so that you and your teams can also experience these exclusive insights and webinar events, and ultimately see how being DevNet Specialized can benefit your teams, your business, and the business of your customers.

Saturday, 6 March 2021

Real-Time Translations, Improved Search Performance and More in the Webex App March Update

Webex App March Update

As the saying goes, March comes in like a lion and out like a lamb. So does the Cisco Webex app … at least the lion part. In this month’s release, we bring you the much-anticipated king of features: real-time translations in meetings. In messaging, we deliver a 4x improvement in Webex search performance. In Webex Calling, you’ll see key feature enhancements for media optimization, and in Unified CM, new call recording services among other exciting developments.

Meetings in Webex

◉ In late March, Webex will begin a trial of real-time translation* – from English to 100+ languages. That means non-native English speakers and/or hearing-impaired participants can choose closed-caption translation from English to one of the 108 additional languages supported. Real-time translation aids understanding and creates a more inclusive meeting, where language is no longer a barrier to great collaboration. Imagine the impact real-time translation could have on a virtual global classroom or a multinational company all-hands, where better understanding could result in greater engagement. And we have deeply embedded this capability into the Webex UI, so the user experience will be familiar and effortless. See it in action:

Enterprise customers can reach out to their Cisco sales rep to sign up for the real-time translation trial. The trial will also be enabled with some restrictions. We will open the trial more broadly in May when the feature becomes generally available.

◉ Another long-awaited feature: Q&A is now supported in Webex Meetings. Together with previously released features – such as breakout sessions, co-hosts, and hard mute – you now have all the functionality you need to have a great training experience in Webex Meetings. The Q&A capability allows attendees to post questions in the Q&A panel with answers provided by the host and co-hosts. Multiple co-hosts can be assigned to the meeting, so you can have as many Q&A panelists as needed to conduct a highly effective training session. Teachers and corporate trainers now have powerful tools to conduct interactive and effective training sessions in Webex Meetings.


◉ For scheduled meetings, we’ve improved the attendee join experience in the event they join before a meeting starts. Rather than having to hang up and dial back in, they can now wait in the pre-meeting lobby until the host arrives and even notify the host that they are waiting. This feature is already available for Webex Personal Room meetings, so this is making the experience consistent across Webex meeting types.

Messaging in Webex

With a 400 percent improvement in our search performance time, you will now enjoy lightning-fast results when you search for keywords in Webex messaging. Webex will return near-instantaneous results, making you more efficient than ever before. No more endless scrolling through spaces to find that particular message. You’ll also be able to narrow your search and find messages instantly with the addition of In: (in a space) and From: (from a contact) modifiers. These can be selected from the advanced search menu or typed straight into your search box. Or speed things up with new keyboard shortcuts:

Command + F: Open search bar

Command + F + Shift: Open search in space

◉ Viewing, sending, and navigating files are some of the most frequent actions we take every day as we collaborate. With this in mind, we have made major updates to the content tab, including a new ‘list’ view option for reviewing files in chronological order, as well as the ability to drag and drop files into this area to share them in the space. This lets you keep all your project assets easily accessible in a well-organized space.


◉ Team spaces are great when a team project needs to be broken down into smaller sub-groups, allowing more efficient and precise collaboration. Originally, moderators only had control over the ‘General’ space. Now moderators have full control over all spaces within a Team. This gives them extra control and additional features such as the ability to add and remove participants and control the contents of a space including deleting other users’ messages.


◉ When working across multiple spaces, it can be easy to get distracted and forget what you were working on. We have now added ‘forward and back’ arrows on the Webex app header to help guide you through your spaces and keep track of where you were.

Calling in Webex

◉ Making calls has never been easier in Webex. In the desktop client, you can now enter the phone number in the global search bar and press ‘Enter’ to make the call. You no longer need to navigate your mouse to the audio or video call buttons. Or speed things up even more with new keyboard shortcuts:

Audio call:
Option + Command + C (Mac)
Control + Alt + C (Windows)

Video call:
Option + Command + U (Mac)
Control + Alt + V (Windows)


◉ Webex app with Webex Calling: Media optimization (ICE) allows calls between Webex apps to keep media on premises. This helps businesses decrease bandwidth usage, reduce latency, and improve quality. No extra hardware or configuration is required. Backend support will launch by the end of March, and when integrated into the Webex app in the first week of April, ICE will automatically improve call performance.

◉ Webex app with Unified CM: More controls are coming to call recording. If you’re set up by your administrator to record calls, you can now start and stop recordings as needed during your call, providing flexibility and greater control. If the call is being recorded, the recording continues if you move the call to another device, merge the call with another active call, or make it a conference call. A visual indicator will let you know when a call is being recorded.


Webex App Device Integration

◉ In situations where proximity pairing is not available, you can now pair to a device using a 9-character code. For instance, if you’re on a guest network, simply get the code from the device and enter it into the Webex app device panel. Once paired, you can use the device for audio/video, wireless screen share, and device control, so you can work the way you want with the device of your choice.


◉ Closed captioning is now available on Webex devices for Webex Assistant for Meetings subscribers. Hosts were already able to turn on Webex Assistant for Meetings from their devices. Now, participants and hosts using Webex devices will also be able to see closed captioning, making the experience more aligned across Webex apps and devices.


Thursday, 4 March 2021

Enable Consistent Application Services for Containers


Kubernetes is all about abstracting away complexity. As Kubernetes continues to evolve, it becomes more intelligent, and it will become even more powerful when it comes to helping enterprises manage their data centers, not just the cloud. While enterprises have had to deal with the challenges of managing different types of modern applications (AI/ML, big data, and analytics) to process that data, they also face the challenge of maintaining top-level network and security policies and gaining better control of workloads to ensure operational and functional consistency. This is where Cisco ACI and F5 Container Ingress Services come into the picture.

F5 Container Ingress Services (CIS) and Cisco ACI

Cisco ACI offers customers an integrated network fabric for Kubernetes. Recently, F5 and Cisco joined forces by integrating F5 CIS with Cisco ACI to bring L4-7 services into the Kubernetes environment, further simplifying the user experience in deploying, scaling, and managing containerized applications. This integration specifically enables:

◉ Unified networking: Containers, VMs, and bare metal

◉ Secure multi-tenancy and seamless integration of Kubernetes network policies and ACI policies

◉ A single point of automation with enhanced visibility for ACI and BIG-IP.

◉ F5 application services natively integrated into container and Platform as a Service (PaaS) environments

One of the key benefits of this implementation is ACI encapsulation normalization. The ACI fabric, acting as the normalizer for the encapsulation, allows you to merge different network technologies or encapsulations, be it VLAN or VXLAN, into a single policy model. Through a simple VLAN connection to ACI, and with no need for an additional gateway, BIG-IP can communicate with any service anywhere.


Solution Deployment

To integrate F5 CIS with Cisco ACI for the Kubernetes environment, you perform a series of tasks. Some you perform in the network to set up the Cisco Application Policy Infrastructure Controller (APIC); others you perform on the Kubernetes server(s). Rather than getting down to the nitty-gritty, I will just highlight the steps to deploy the joint solution.


The BIG-IP CIS and Cisco ACI joint solution deployment assumes that you have the following in place:

◉ A working Cisco ACI installation

◉ ACI must be integrated with vCenter VDS

◉ Fabric tenant pre-provisioned with the required VRFs/EPGs/L3OUTs.

◉ BIG-IP already running for non-container workload

Deploying Kubernetes Clusters to ACI Fabrics

The following steps will provide you with a complete cluster configuration:

Step 1. Run the ACI provisioning tool to prepare Cisco ACI to work with Kubernetes

Cisco provides an acc-provision tool to provision the fabric for the Kubernetes VMM domain and generate a .yaml file that Kubernetes uses to deploy the required Cisco Application Centric Infrastructure (ACI) container components. If needed, download the provisioning tool.

Next, you can use this provision tool to generate a sample configuration file that you can edit.

$ acc-provision --sample > aci-containers-config.yaml
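For orientation, the generated sample looks roughly like the excerpt below. This is illustrative only: the field layout follows the acc-provision sample format, but every ID, VLAN, and subnet is a placeholder that must match your own fabric design.

```yaml
# Illustrative excerpt of aci-containers-config.yaml; all values are placeholders.
aci_config:
  system_id: mykube            # every Kubernetes cluster needs a unique ID
  apic_hosts:
  - 10.1.1.101                 # APIC address(es)
  vmm_domain:
    encap_type: vxlan
  aep: kube-cluster            # attachable entity profile used by the nodes
  vrf:
    name: mykube-vrf
    tenant: common
  l3out:
    name: mykube-l3out         # pre-provisioned L3Out for external access
    external_networks:
    - mykube-ext-net
net_config:
  node_subnet: 10.1.0.1/16     # node IP addresses
  pod_subnet: 10.2.0.1/16      # pod IP addresses
  extern_dynamic: 10.3.0.1/24  # dynamically allocated external IPs
  extern_static: 10.4.0.1/24   # statically allocated external IPs
  node_svc_subnet: 10.5.0.1/24 # service graph subnet
  kubeapi_vlan: 4001
  service_vlan: 4003
  infra_vlan: 4093
```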

You can now edit the sample configuration file to provide information from your network. With the configuration file in place, run the following command to provision the Cisco ACI fabric:

$ acc-provision -c aci-containers-config.yaml -o aci-containers.yaml -f kubernetes-<version> -a -u [apic username] -p [apic password]

Step 2. Prepare the ACI CNI plug-in configuration file

The above command also generates the file aci-containers.yaml that you use after installing Kubernetes.

Step 3. Prepare the Kubernetes nodes – set up networking on each node to support the Kubernetes installation

With ACI provisioned, you can start preparing networking for the Kubernetes nodes. This includes steps such as configuring the VM interfaces toward the ACI fabric, configuring a static route for the multicast subnet, configuring the DHCP client to work with ACI, and so on.

Step 4. Install the Kubernetes cluster

After you provision Cisco ACI and prepare the Kubernetes nodes, you can install Kubernetes and the ACI containers. You can use any installation method appropriate to your environment.

Step 5. Deploy Cisco ACI CNI plugin

When the Kubernetes cluster is up and running, copy the previously generated CNI configuration to the master node and install the CNI plug-in using the following command:

kubectl apply -f aci-containers.yaml

The command installs the following components:

◉ ACI Containers Host Agent and OpFlex agent in a DaemonSet called aci-containers-host

◉ Open vSwitch in a DaemonSet called aci-containers-openvswitch

◉ ACI Containers Controller in a deployment called aci-containers-controller

◉ Other required configurations, including service accounts, roles, and security context


For the authoritative word on this specific implementation, refer to the workflow for integrating Kubernetes into Cisco ACI for the latest and greatest.

After you have performed the previous steps, you can verify the integration in the Cisco APIC GUI. The integration creates a tenant, three EPGs, and a VMM domain. Each tenant has visibility of all the Kubernetes pods.


Install the BIG-IP Controller

The F5 BIG-IP Controller (k8s-bigip-ctlr), or Container Ingress Services if you aren’t familiar with it, is a Kubernetes-native service that provides the glue between container services and BIG-IP. It watches for changes and communicates them to BIG-IP-delivered application services, which in turn keep up with changes in container environments and enable the enforcement of security policies.

Once you have a running Kubernetes cluster deployed to ACI Fabric, you can follow these instructions to install BIG-IP Controller.

Use the kubectl get command to verify that the k8s-bigip-ctlr Pod launched successfully.


BIG-IP as a north-south load balancer for External Services

For Kubernetes services that are exposed externally and need to be load balanced, Kubernetes does not handle the provisioning of the load balancing; the load-balancing network function is expected to be implemented separately. For these services, Cisco ACI takes advantage of the symmetric policy-based redirect (PBR) feature available in Cisco Nexus 9300-EX and 9300-FX leaf switches in ACI mode.

This is where BIG-IP Container Ingress Services (or CIS) comes into the picture, as the north-south load balancer. On ingress, incoming traffic to an externally exposed service is redirected by PBR to BIG-IP for that particular service.


If a Kubernetes cluster contains more than one pod for a particular service, BIG-IP will load balance the traffic across all the pods for that service. In addition, each new pod is added to the BIG-IP pool dynamically.


Tuesday, 2 March 2021

Machine Reasoning is the new AI/ML technology that will save you time and facilitate offsite NetOps


Machine reasoning is a new category of AI/ML technologies that can enable a computer to work through complex processes that would normally require a human. Common applications for machine reasoning are detail-driven workflows that are extremely time-consuming and tedious, like optimizing your tax returns by selecting the best deductions from the many available options. Another example is the execution of workflows that require immediate attention and precise detail, like the shut-off protocols in a refinery following a fire alarm. What both examples have in common is that executing each process requires a clear understanding of the relationships between the variables, including order, location, timing, and rules, because in a workflow each decision can alter subsequent steps.

So how can we program a computer to perform these complex workflows? Let’s start by understanding how the process of human reasoning works. A good example in everyday life is the front door to a coffee shop. As you approach the door, your brain goes into reasoning mode and looks for clues that tell you how to open the door. A vertical handle usually means pull, while a horizontal bar could mean push. If the building is older and the door has a knob, you might need to twist the knob and then push or pull depending on which side of the threshold the door is mounted. Your brain does all of this reasoning in an instant, because it’s quite simple and based on having opened thousands of doors. We could program a computer to react to each of these variables in order, based on incoming data, and step through this same process.

Now let’s apply these concepts to networking. A common task in most companies is compliance checking, where each network device (switch, access point, wireless controller, or router) is checked for software version, security patches, and consistent configuration. In small networks this is a full day of work; larger companies might have an IT administrator dedicated to this process full-time. A cloud-connected machine reasoning engine (MRE) can keep tabs on your device manufacturer’s online software updates and security patches in real time. It can also identify identical configurations for device models and organize them in groups, so as to verify consistency for all devices in a group. In this example, the MRE is automating a very tedious and time-consuming process that is critical to network performance and security, but a task that nobody really enjoys doing.
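As an illustration of what this kind of automation looks like in code, here is a hypothetical sketch that groups a device inventory by model and flags devices lagging the newest software version seen in their group. A real MRE workflow would, of course, check against Cisco’s published releases rather than the inventory itself; the field names are invented for the example.

```python
from collections import defaultdict

def compliance_report(inventory):
    """inventory: list of dicts with 'hostname', 'model', 'version' keys."""
    groups = defaultdict(list)
    for dev in inventory:
        groups[dev["model"]].append(dev)
    report = {}
    for model, devs in groups.items():
        # naive target: the newest version observed within the model group
        target = max(d["version"] for d in devs)
        behind = [d["hostname"] for d in devs if d["version"] != target]
        report[model] = {"target": target, "out_of_compliance": behind}
    return report
```

Feeding this the inventory pulled from a controller API turns a full day of manual checking into a single report.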

Another good real-world example is troubleshooting an STP data loop in your network. Spanning Tree Protocol (STP) loops often appear after upgrades or additions to a Layer-2 access network and can cause data storms that result in severe performance degradation. The process for diagnosing, locating, and resolving an STP loop can be time-consuming and stressful. It also requires a certain level of networking knowledge that newer IT staff members might not yet have. An AI-powered machine reasoning engine can scan your network, locate the source of the loop, and recommend the appropriate action in minutes.

Cisco DNA Center delivers some incredible machine reasoning workflows with the addition of a powerful cloud-connected Machine Reasoning Engine (MRE). The solution offers two ways to experience the usefulness of this new MRE. The first way is something many of you are already aware of, because it has been part of our AI/ML insights in Cisco DNA Center for a while now: proactive insights. When Cisco DNA Center’s assurance engine flags an issue, it may send that issue to the MRE for automated troubleshooting. If there is an MRE workflow to resolve the issue, you will be presented with a run button to execute that workflow and resolve it. Since we’ve already mentioned STP loops, let’s take a look at how that would work.

When a broadcast storm is detected, AI/ML can look at the IP addresses and determine that it’s a good candidate for STP troubleshooting. You’ll get the following window when you click on the alert:

Image 1: Broadcast storm detected

When you click the “Start Automate Troubleshooting” button, you spin up the machine reasoning engine and it traces the host flaps. If it detects STP loops, you’ll see this window:

Image 2: STP Loops Detected

Image 3: STP loops identified by device and VLAN

Now click on View Details and the MRE will present the specifics for the related VLANs, as well as a logical map of the loop with the names of the relevant devices and the VLAN number. All you need to do now is prune your VLANs in those switches, and you’ve solved a complex issue in just a couple of minutes. The ease with which this problem is resolved shows how the MRE can bridge the skills gap and enable less-experienced IT members to proactively resolve network issues. It also demonstrates that machines can discover, investigate, and resolve network issues much faster than a human can. Eliminating human latency in issue resolution can greatly improve the user experience on your network.

Another example of a proactive workflow is the “PSIRT alert,” which flags Cisco devices that have advisories for bug or vulnerability software patches. You will see this alert automatically any time Cisco has released a PSIRT advisory that is relevant to one of your devices. Simply click the PSIRT alert and the software patch will be displayed and ready to load. The Cisco DNA Center team is working hard to create more proactive MRE workflows, so you’ll see more of these automated troubleshooting solutions in future upgrades.

The second way to experience machine reasoning in Cisco DNA Center is in the new “Network Reasoner Dashboard,” which is located in the “Tools” menu. There you will find five new buttons that execute automated workflows through the MRE.

Image 4: Network Reasoner Dashboard

1. CPU Utilization: There are a number of reasons the CPU in a networking device might experience high utilization. If you have ever had to troubleshoot this, you know that the remediation list is quite long and the tasks involved are both time-consuming and best performed by a seasoned IT engineer. This button works through numerous checks, such as IOS processes, packets-per-second flow, broadcast storms, etc. It then returns a result with specific guided remediation to resolve the issue.

2. Interface Down: Understanding the reasons an interface doesn’t come up requires deep knowledge of virtual routing and forwarding (VRF). This means that your less-experienced team members will likely escalate this issue to a higher-level engineer. Furthermore, unless your switch has advanced telemetry capabilities, you would need physical access to the switch to rule out a Layer-1 problem such as an SFP, cables, connectors, patch panel, etc. This button compares the interface link parameters at each end and runs a loopback, ping, traceroute, and other tests before returning a result for the most likely cause.

3. Power Supply: Cisco Catalyst switches can detect power issues related to inconsistent voltage, fluctuating input, no connection, etc. This is generally done on site with a visual inspection of the interface and LEDs. The MRE workflow uses sensors and logical reasoning to determine the probable cause. So press this button if you want to skip a trip to the switch site.

4. Ping Device: I know what you’re thinking: it’s so simple to ping a device. But it does take time to open a CLI window, and it’s a distraction from the window you have open. Now all you need to do is push a button and enter the target IP address.

5. Fabric Data Collection: Moving to a software-defined network with a fully layered fabric and micro-segmentation has tremendous benefits, but it does take some training to master. This button will collect show-command outputs from network devices for complete visibility of your overlay (virtual) network. Having clear visibility can help you troubleshoot issues in your fabric network.

Now that you know what machine reasoning is and what it can offer your team, let’s take a look at how it works. It all starts with Cisco subject matter experts who have created a knowledge base of processes required to achieve certain outcomes, based on best practices, defect signatures, PSIRTs, and other data. Using a “workflow editor,” these processes are encapsulated into a central knowledge base located in the Cisco cloud. When the AI/ML assurance engine in Cisco DNA Center sees an issue, it sends the issue to the MRE, which then uses inferences to select a relevant workflow from the knowledge base in the cloud. Cisco DNA Center can then present remediation or execute a complete workflow to resolve the issue. In the case of the on-demand workflows in the Network Reasoner dashboard, the MRE simply selects the workflow from the knowledge base and executes it.
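To make the matching step concrete, here is a toy sketch: an observed issue is compared against a small rule base, and the first workflow whose conditions all hold is selected. The rule format and issue fields are invented for illustration; they are not Cisco’s actual MRE schema.

```python
# Hypothetical knowledge base: each entry pairs a workflow name with
# the conditions an issue must satisfy for that workflow to apply.
KNOWLEDGE_BASE = [
    {"name": "stp_loop_trace", "matches": {"symptom": "broadcast_storm"}},
    {"name": "psirt_patch",    "matches": {"symptom": "psirt_advisory"}},
]

def select_workflow(issue, kb=KNOWLEDGE_BASE):
    """Return the first workflow whose match conditions all hold for the issue."""
    for wf in kb:
        if all(issue.get(k) == v for k, v in wf["matches"].items()):
            return wf["name"]
    return None  # no applicable workflow in the knowledge base
```

The real engine reasons over far richer signals, but the essence is the same: observed facts are matched against curated expert knowledge to pick a remediation workflow.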

Figure 1: MRE architecture

If you’re following my description of the process on the image above, you’ll notice I left out a couple of icons in the diagram: Community, Partners, and Governance. Cisco is inviting our DevNet community and fabulous Cisco Partners to create and publish MRE workflows. In conjunction with Cisco CX, we have developed a governance process, which operates inside our software Early Field Trials (EFT) program. This allows us to grow the library of workflows in the Network Reasoner window with industry-specific as well as other interesting and time-saving workflows. What tedious networking tasks would you like to automate? Let me know in the comments below!

If you haven’t yet installed the latest Cisco DNA Center software (version 2.1.2.x), the newly expanded machine reasoning engine is a great reason to do it. Look for continued development in our AI/ML machine reasoning engine in the coming releases, with features for compliance verification (HIPAA, PCI DSS), network consistency checks (DNS, DHCP, IPAM, and AAA), security vulnerabilities (PSIRTs), and more.


Monday, 1 March 2021

Get Ready to Crack Cisco CCNP Security 300-710 Certification Exam

Cisco SNCF Exam Description:

The Securing Networks with Cisco Firepower v1.0 (SNCF 300-710) exam is a 90-minute exam associated with the CCNP Security and Cisco Certified Specialist - Network Security Firepower certifications. This exam tests a candidate's knowledge of Cisco Firepower® Threat Defense and Firepower®, including policy configurations, integrations, deployments, management, and troubleshooting. The courses Securing Networks with Cisco Firepower and Securing Networks with Cisco Firepower Next-Generation Intrusion Prevention System help candidates prepare for this exam.

Cisco 300-710 Exam Overview:

Exam Name: Securing Networks with Cisco Firepower (SNCF)
Exam Number: 300-710
Exam Price: $300 USD
Duration: 90 minutes
Number of Questions: 55-65
Passing Score: Variable (approx. 750-850 out of 1000)
Recommended Training: Securing Networks with Cisco Firepower; Securing Networks with Cisco Firepower Next-Generation Intrusion Prevention System
Exam Registration: Pearson VUE

Saturday, 27 February 2021

Optimize Real-World Throughput with Cisco Silicon One


Switches are the heart of a network data plane and at the heart of any switch is the buffering subsystem. Buffering is required to deal with transient oversubscription in the network. The size of the buffer determines how large of a burst can be accommodated before packets are dropped. The goodput of the network depends heavily on how many packets are dropped.

The amount of buffer needed for optimal performance is mainly dependent on the traffic pattern and network Round-Trip Time (RTT).

The applications running on the network drive the traffic pattern, and therefore what the switch experiences. Modern applications such as distributed storage systems, search, AI training, and many others employ partition and aggregate semantics, resulting in traffic patterns that are especially effective in creating large oversubscription bursts. For example, consider a search query where a server receives a packet to initiate a search request. The task of mining through the data is dispatched to many different servers in the network. Once each server finishes the search it sends the results back to the initiator, causing a large burst of traffic targeting a single server. This phenomenon is referred to as incast.
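
A back-of-the-envelope sketch makes the incast burst concrete; the worker count and response size below are illustrative numbers of my own choosing:

```python
def incast_burst_bits(num_workers: int, response_bytes: int) -> int:
    """Total burst arriving at the initiator when all workers reply at once."""
    return num_workers * response_bytes * 8

# 100 workers each returning 20 KB produce a 16 Mb burst aimed at one port;
# a 100 Gbps port needs 160 us to drain it, so the excess must be buffered.
burst = incast_burst_bits(100, 20_000)
drain_time_us = burst / 100e9 * 1e6
```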

Round-Trip Time

The network RTT is the time it takes a packet to travel from a traffic source to a destination and back. This is important because it directly translates to the amount of data a transmitter must be allowed to send into the network before receiving acknowledgment for data it sent. The acknowledgments are necessary for congestion avoidance algorithms to work and in the case of Transmission Control Protocol (TCP), to guarantee packet delivery.

For example, a host attached via a 100Gbps link to a network with an RTT of 16us must be allowed to send at least 1.6Mb (16us * 100Gbps) of data before receiving an acknowledgment if it is to transmit at the full 100Gbps. In TCP, this is referred to as the congestion window size, which for a flow is ideally equal to the bandwidth-delay product.
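
The congestion-window arithmetic from the example checks out directly:

```python
def bandwidth_delay_product_bits(link_bps: float, rtt_seconds: float) -> float:
    """Minimum in-flight data needed to keep the link busy for one full RTT."""
    return link_bps * rtt_seconds

# 100 Gbps link, 16 us RTT -> 1.6 Mb (200 KB) must be in flight.
bdp = bandwidth_delay_product_bits(100e9, 16e-6)
```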

The amount of buffer a switch needs to avoid packet drop is directly related to this bandwidth delay product. Ideally a queue within a switch should have enough buffer to accommodate the sum of the congestion windows of all the flows passing through it. This guarantees that a sudden incast will not cause the buffer to overflow. For Internet routers this dynamic has been translated to a widely used rule of thumb: each port needs a buffer of average RTT times the port rate.

However, the datacenter presents a different environment than the Internet. Whereas an Internet router can expect to see 10s of thousands of flows across a port with the majority bandwidth distributed across 1000s of flows, a datacenter switch often sees most of the bandwidth distributed over a few high bandwidth elephant flows. Thus, for a datacenter switch, the rule is that a port needs at most the entire switch bandwidth (not just the port bandwidth) times average RTT. In practice of course this can be relaxed by noting that this assumes an extremely pessimistic scenario where all traffic happens to target one port. Regardless, a key observation is that the total switch buffer is also the entire switch bandwidth times average RTT, just like for the Internet router case. Therefore, the most efficient switch design is one where all the buffer in the switch can be dynamically available to any port.

Figure 1. Buffer requirement based on RTT

To help understand the round-trip times associated with a network, let’s look at a simple example. The RTT is a function of the network physical span, the delay of intermediate switches, and end nodes delay (that is the network adapters and the software stack). Light travels through fiber at about 5us per kilometer, so the contribution of the physical span is easy to calculate. For example, communication between two hosts in a datacenter with a total fiber span of 500 meters per direction will contribute 5us to the RTT. The delay through switches is composed of pipeline (minimum) delay and buffering delay.

Low-delay switches can provide under 1us of pipeline delay. However, this is an ideal number based on a single packet flowing through the device. In practice, switches have many packets flowing through them simultaneously, and with many flows from different sources some minimum buffering in the switches is needed. Even a small buffer of 10KB will add almost 1us to the delay through a 100Gbps link.

Finally, optimized network adapters will add a minimum of two microseconds of latency, and often this is much more. So, putting this all together we can see that even a small datacenter network with 500 meters of cable span and three switching hops will result in a minimum RTT of around 16us. In practice, networks are typically never this ideal, having more hops and covering greater distances, with even greater RTTs.
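
Putting the contributions together numerically: the article gives only rough figures, so the per-switch and adapter delays below are illustrative assumptions I chose to make the total come out near the quoted 16us:

```python
FIBER_DELAY_US_PER_KM = 5.0  # light propagation in fiber, one way

def min_rtt_us(fiber_km_one_way: float, switch_hops: int,
               per_switch_us: float = 1.5, adapter_us: float = 2.0) -> float:
    """Estimate minimum RTT: fiber traversed both ways, each switch traversed
    twice (once per direction), plus end-node adapter/stack delay.
    per_switch_us and adapter_us are assumed values for illustration."""
    fiber = 2 * fiber_km_one_way * FIBER_DELAY_US_PER_KM
    switches = 2 * switch_hops * per_switch_us
    return fiber + switches + adapter_us

# 500 m of fiber and three switching hops -> roughly 16 us.
rtt = min_rtt_us(0.5, 3)
```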

Figure 2. Simple Datacenter Network – minimum RTT

As can be seen from the figure above, supporting a modest RTT of 32us in a 25.6T switch requires up to 100MB. It’s important to notice at this point that this is both the total required buffer in the switch and the maximum required buffer for any one port. The worst-case oversubscription to any one port is when all incoming traffic happens to target one port. In this pathological incast case, all the buffer in the device is needed by the victim port to absorb the burst. Other oversubscribing traffic patterns involving multiple victim ports will require that the buffer be distributed in proportion to the oversubscription factor among the victim ports.
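
The 100MB figure follows from multiplying the entire switch bandwidth by the RTT, per the datacenter rule derived earlier:

```python
def switch_buffer_bytes(switch_bw_bps: float, rtt_seconds: float) -> float:
    """Worst-case buffer requirement: entire switch bandwidth times RTT."""
    return switch_bw_bps * rtt_seconds / 8

# 25.6 Tbps switch, 32 us RTT -> about 102 MB of buffer.
buf = switch_buffer_bytes(25.6e12, 32e-6)
```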

It’s also important to note that other protocols, like User Datagram Protocol (UDP) that are utilized by Remote Direct Memory Access (RDMA), don’t have the congestion feedback schemes used in TCP, and they rely on flow control to prevent packet loss during bursts. In this case the buffer is critical as it reduces the likelihood of triggering flow control, thus reducing the likelihood of blocking and optimizing overall network throughput.

Traditional Buffering Architectures

Unfortunately, since the buffer must handle extremely high bandwidth, it needs to be integrated on the core silicon die, meaning off-chip buffering that can keep up with the total IO bandwidth is no longer possible, as we discussed in our white paper, “Converged Web Scale Switching And Routing Becomes A Reality: Cisco Silicon One and HBM Memory Change the Paradigm”. On-die buffering in high bandwidth switches consumes a significant amount of die area, and therefore it’s important to use whatever buffer can be integrated on-die in the most efficient way.

Figure 3: Bandwidth growth in DDR memories and Ethernet switches

Oversubscription is an unpredictable transient condition that impacts different ports at different times. An efficient buffer architecture takes advantage of this by allowing the buffer to be dynamically shared between ports.

Most modern architectures support packet buffer sharing. However, not all claims of shared memory are equal, and not surprisingly this fact is usually not highlighted by the vendors. Often there are restrictions on how the buffer can be shared. Buffer sharing can be categorized according to the level and orientation of sharing, as depicted in the figures below:

Figure 4. Shared buffer per output port group

A group of output ports share a buffer pool. Each buffer pool absorbs traffic destined to a subset of the output ports.

Figure 5. Shared buffer per input port group

A group of input ports share a buffer pool. Each buffer pool absorbs traffic from a subset of the input ports.

Figure 6. Shared buffer per input-output port group

A group of input and output ports share a buffer pool. Each buffer pool absorbs traffic from a subset of input ports for a subset of output ports.

In all the cases where there are restrictions on the sharing, the amount of buffer available for burst absorption to a port is unpredictable since it depends on the traffic pattern.

With output buffer sharing, burst absorption to any port is restricted to the individual pool size. For example, an output buffer architecture with four pools means that any output port can consume at most 25 percent of the total memory. This restriction can be even more painful under more complex traffic patterns, as depicted in the figure below, where an output port is restricted to 1/16th of the total buffer. Such restriction makes buffering behavior under incast unpredictable.

Figure 7. Output buffered switch with 4 x 2:1 oversubscription traffic

With input buffer sharing, burst absorption depends on the traffic pattern. For example, in a 4:1 oversubscription traffic pattern with the buffer partitioned to four pools, the burst absorption capacity is anywhere between 25-100 percent of total memory.

Figure 8. Input buffer utilization with 4:1 oversubscription traffic

Input-output port group sharing, like input buffer sharing, by design limits an output port to a fraction of the total memory. In the example of four pools, any one port is limited by design to half the total device buffer. This architecture further limits buffer usage depending on traffic patterns, as in the example below where an output port can use only 12.5 percent of the device buffer instead of 50 percent.

Figure 9: input-output port group buffer architecture with 2 x 2:1 oversubscription traffic
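
The per-port limits quoted in the three four-pool examples above can be summarized in a small helper; this is a simplified model of the sharing schemes, with the fractions taken from the text (real silicon varies):

```python
# Worst-case-guaranteed fraction of total device buffer available to one
# output port, per the four-pool examples described above.
MAX_PORT_FRACTION = {
    "output_pools": 1 / 4,          # port confined to its own output pool
    "input_output_groups": 1 / 2,   # port limited to the pools feeding it
    "fully_shared": 1.0,            # Cisco Silicon One approach
}

def usable_buffer_mb(total_mb: float, architecture: str) -> float:
    """Buffer guaranteed available to a single bursting port."""
    return total_mb * MAX_PORT_FRACTION[architecture]
```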

Cisco Silicon One employs a fully shared buffer architecture as depicted in the figure below:

Figure 10. Cisco Silicon One Fully Shared Buffer

In a fully shared buffer architecture, all the packet buffer in the device is available for dynamic allocation to any port, meaning the buffer is shared among ALL the input-output ports without any restrictions. This maximizes the efficiency of the available memory and makes burst absorption capacity predictable, as it’s independent of the traffic pattern. In the examples presented above, the fully shared architecture yields an effective buffer size that is at least four times the alternatives. This means that, for example, a 25.6T switch that requires up to 100MB of buffer per device and per port needs exactly 100MB of on-die buffer if it is implemented as a fully shared buffer. To achieve the same performance guarantee, a partially shared design that breaks the buffer into four pools would need four times the memory.
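
Turning that comparison around: to guarantee the same worst-case burst absorption to any one port, a design split into N pools must provision roughly N times the memory. A simplified model of that argument:

```python
def required_total_buffer_mb(per_port_guarantee_mb: float, num_pools: int) -> float:
    """Total memory needed so any single port can absorb per_port_guarantee_mb
    of burst, when each port is confined to 1/num_pools of the device buffer
    (simplified model of the pooled architectures above)."""
    return per_port_guarantee_mb * num_pools

# Fully shared (one pool): 100 MB suffices. Four pools: 400 MB for the same guarantee.
```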

The efficiency gains of a fully shared buffer also extend to RDMA protocol traffic. RDMA uses UDP, which doesn’t rely on acknowledgments, so RTT is not directly a driver of the buffer requirement. However, RDMA relies on Priority-based Flow Control (PFC) to prevent packet loss in the network. A big drawback of flow control is the fact that it’s blocking and can cause congestion to spread by stopping unrelated ports. A fully shared buffer helps to minimize the need to trigger flow control by supporting more buffering when and where it’s needed. In other words, it raises the bar for how much congestion must occur before flow control is triggered.

Friday, 26 February 2021

Preparing for the Cisco 300-620 DCACI Exam: Hints and Tips

CCNP Data Center certification confirms your skills with data center solutions. To obtain CCNP Data Center certification, you need to pass two exams: one that involves
core data center technologies and one data center concentration exam of your preference. For the CCNP Data Center concentration exam, you can tailor your certification to your technical area of focus. In this article, we will discuss the concentration exam 300-620 DCACI: Implementing Cisco Application Centric Infrastructure.

IT professionals who earn the CCNP Data Center certification are prepared for major roles in complex data center environments, with expertise in technologies including policy-driven infrastructure, virtualization, automation and orchestration, unified computing, data center security, and integration of cloud initiatives. CCNP Data Center certified professionals are highly qualified for senior roles driving digital business transformation initiatives.

Cisco 300-620 Exam Details

The Cisco DCACI 300-620 exam is a 90-minute exam comprising 55-65 questions, associated with the CCNP Data Center and Cisco Certified Specialist – Data Center ACI Implementation certifications. This exam tests a candidate's understanding of Cisco switches in ACI mode, including configuration, implementation, and management.

Tips That Can Help You Succeed in Cisco 300-620 DCACI Exam

There are a lot of things that the applicants ought to keep in mind to score well. Here they are:

  • To get a higher score, you should know that practice tests are essential for this exam. For this reason, make a structured plan to solve them daily. Practice tests will reveal the gaps in your exam preparation. Moreover, you will sharpen your time-management skills.
  • Take ample time for your Cisco 300-620 exam preparation. The CCNP Data Center certification exam may not appear difficult, but the questions asked are often very tricky. Thorough preparation will wipe out confusion, and you will be more composed during your exam. Having a calm and composed mind during the exam, without last-minute panic, will improve the odds of passing the Cisco 300-620 DCACI exam. Hence, it is essential to prepare yourself well before sitting for the exam.
  • Get familiar with the Cisco 300-620 DCACI exam syllabus. The questions in the exam will come from the syllabus. Without it, you may end up studying material that will not be evaluated on the exam. You should get the syllabus from the Cisco official website to ensure that you cover the essential areas. Study all the topics covered in the exam syllabus so that you don't leave out any important details.
  • Apart from following the syllabus and reading the essential and relevant material, video training can also help enhance your knowledge and sharpen your skills. It is also essential to read a relevant Cisco 300-620 DCACI book to acquire mastery over exam concepts. However good the video training course and instructor may be, they cannot cover every important theoretical detail.
  • Participate in an online community. Such groups can be of great help in passing the Cisco 300-620 DCACI exam. Being part of such a community means more heads are better than one. Studying together, you will be better positioned to grasp the concepts, because another member of the community may have understood them better.
  • When studying the exam by yourself, you might always visualize the study material from the same point of view. This might not be an issue, but getting familiar with different views on the subject can help you learn more comprehensively. You will be in a position to obtain distinct skills and share opinions with other people.


As an IT professional, it should be apparent that achieving a relevant certification is a sure way to strengthen your status in the industry and scale the corporate ladder. The above-mentioned tips should simplify your Cisco 300-620 DCACI certification journey and turn your career aspirations into success.

Thursday, 25 February 2021

Cisco User Defined Network: Redefining Secure, Personal Networks


Connecting all your devices to a shared network environment such as dorm rooms, classrooms, multi-dwelling building units, etc. may not be desirable: there are too many users and devices on the shared network, and onboarding of devices is not secure. In addition, there is limited user control; that is, there is no easy way for users to deterministically discover and limit access to only the devices that belong to them. You can see all users’ devices, and every user can see your device. This not only results in a poor user experience but also raises security concerns, since users can knowingly or unknowingly take control of devices that belong to other users.

Cisco User Defined Network (UDN) changes the shared network experience by enabling simple, secure, and remote onboarding of wireless endpoints onto the shared network, giving a personal network-like experience. Cisco UDN gives end-users control to create their own personal network consisting of only their devices, and also enables them to invite other trusted users into their personal network. This provides security to end-users while giving them the ability to collaborate and share their devices with other trusted users.

Solution Building Blocks

The following are the functional components required for the Cisco UDN solution, which is supported on Catalyst 9800 controllers in centrally switched mode.

Figure 1. Solution Building Blocks

Cisco UDN Mobile App: The mobile app is used for registering a user’s devices onto the network from anywhere (on-prem or off-prem), at any time. End-users can log in to the mobile app using the credentials provided by the organization’s network administrator. Device onboarding can be done in multiple ways. These include:

◉ Scanning the devices connected to the network and selecting devices required to be onboarded

◉ Manually entering the MAC address of the device

◉ Using a camera to capture the MAC address of the device, or using a picture of the MAC address to be added

In addition, using the mobile app, users can invite other trusted users to be part of their private network segment. The mobile app is available for download on both the Apple App Store and Google Play Store.

Cisco UDN Cloud Service: The cloud service is responsible for ensuring that registered devices are authenticated with Active Directory through a SAML 2.0-based SSO gateway or Azure AD. The cloud service also assigns end-users and their registered devices to a private network, and provides rich insights about the UDN service through the cloud dashboard.

Cisco DNA Center: An on-prem appliance which connects with the Cisco UDN cloud service. It is the single point through which the on-prem network can be provisioned (automation), and it provides visibility through telemetry and assurance data.

Identity Services Engine (ISE): Provides authentication and authorization services for the end-users to connect to the network.

Catalyst 9800 Wireless Controller and Access Points: Network elements which enable traffic containment within the personal network. UDN is supported on Wave 2 and Cisco Catalyst access points.

How does it work?

Cisco UDN solution focuses on simplicity and secure onboarding of devices. The solution gives flexibility to the end-users to invite other trusted users to be part of their personal network. The shared network can be segmented into smaller networks as defined by the users. Users from one segment will not be able to see traffic from another user segment. The solution ensures that broadcast, link-local multicast and discovery services (such as mDNS, UPnP) traffic from other user segments will not be seen within a private network segment. Optionally, unicast traffic from other segments can also be blocked. However, unicast traffic within a personal network and north-south traffic will be allowed. 


There are three main workflows associated with UDN:

1. Endpoint registration workflow: A user’s endpoint can register with the UDN cloud service through the mobile app from anywhere, at any time (on-prem or off-prem). Upon registration, the cloud service ensures that the endpoint is authenticated with Active Directory. The cloud service then assigns a private segment/network to the authenticated user and assigns a unique identity: the User Defined Network ID (UDN-ID). This unique identity, along with the user and endpoint information (MAC address), is pushed from the cloud service to on-prem through DNAC. The unique private network identity, along with the user/endpoint information, is stored in ISE.

2. Endpoint on-boarding workflow: When the endpoint joins the wireless network using one of the UDN-enabled WLANs, as part of the authorization policy, ISE will push the private network ID associated with the endpoint to the wireless controller. This mapping of endpoint to UDN-ID is retrieved from ISE. The network elements (wireless LAN controller and access point) will use the UDN-ID to enforce traffic containment for the traffic generated by that endpoint.

3. Invitation workflow: A user can invite another trusted user to be part of their personal network. This is initiated from the inviting user’s mobile app. The invitation triggers a notification to the invitee through the cloud service. The invitee has the option to either accept or reject the invitation. Once the invitee has accepted the request, the cloud service puts the invitee in the same personal network as the inviter and notifies the on-prem network (DNAC/ISE) about the change of personal room for the invitee. ISE then triggers a change of authorization for the invitee and notifies the wireless controller of this change. The network elements take appropriate actions to ensure that the invitee belongs to the inviter’s personal room and enforce traffic containment accordingly.
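
A simplified sketch of the containment decision the network elements make, once the UDN-ID mapping has been pushed down: the names and data structures here are my own invention for illustration, not Cisco's implementation.

```python
# Map each endpoint MAC to its UDN-ID, as pushed to the controller by ISE.
udn_of = {
    "aa:bb:cc:00:00:01": 10,   # Alice's laptop
    "aa:bb:cc:00:00:02": 10,   # Alice's TV (same personal network)
    "aa:bb:cc:00:00:03": 20,   # Bob's tablet (different personal network)
}

def broadcast_receivers(src_mac: str, clients: list[str]) -> list[str]:
    """Broadcast/link-local multicast is replicated only to clients that share
    the sender's UDN-ID, mirroring the AP behavior described in the text."""
    src_udn = udn_of.get(src_mac)
    return [c for c in clients
            if c != src_mac and src_udn is not None and udn_of.get(c) == src_udn]
```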

The following diagram highlights the various steps involved in each of the three workflows.

Figure 2. UDN Workflows

Traffic Containment

Traffic containment is enforced in the network elements: the wireless controller and access points. The UDN-ID, an identifier for a personal network segment, is received by the WLC from ISE as part of the access-accept RADIUS message during either client onboarding or change of authorization. Unicast traffic containment is not enabled by default. When enabled on a WLAN, unicast traffic between two different personal networks is blocked; unicast traffic within a personal network and north-south traffic is still allowed. The wireless controller enforces unicast traffic containment. The traffic containment logic in the AP ensures that link-local multicast and broadcast traffic is sent as unicast traffic over the air to only the clients belonging to a specific personal network. The table below summarizes the details of traffic containment enforced on the network elements.

Figure 3. UDN Traffic Containment

The WLAN on which UDN can be enabled should have either MAC-filtering enabled or should be an 802.1x WLAN. The following are the possible authentication combinations on which UDN can be supported on the wireless controller:


For RLAN, only mDNS and unicast traffic can be contained through UDN. To support LLM and/or broadcast traffic, all clients on the RLAN need to be in the same UDN.

Monitor and Control

End-to-end visibility into the UDN solution is enabled through both the DNA cloud service dashboard and DNAC assurance. In addition, DNAC also enables configuring the UDN service through a single pane of glass.

The DNA Cloud Service provides rich insights through the cloud dashboard. It gives visibility into the devices registered and connected within a UDN, as well as information about the invitations sent to other trusted users.

Figure 4. Insights and Cloud Dashboard

On-prem DNAC enables UDN through an automation workflow and provides complete visibility of UDN through the Client 360 view in Assurance.

Figure 5. UDN Client Visibility

Cisco UDN enriches the user experience in a shared network environment. Users can bring any device they want to the enterprise network and enjoy a home-like experience while connected to it. It is simple, easy to use, and provides security and control for the user's personal network.