Cisco and NVIDIA: Strengthening AI and Security Partnerships

Here’s my take

This announcement is more tangible than the initial collaboration announcement back in February 2024. That announcement felt like an “us too” moment. Now there is a mutual agreement: NVIDIA’s Spectrum-X networking platform will support Cisco’s Silicon One, and Cisco is committed to supporting NVIDIA’s Spectrum silicon with Cisco’s operating system.

This announcement means there will be joint engineering to support heterogeneous environments with the performance and deterministic outcomes enterprises expect. A heterogeneous environment adds value by limiting exposure to external threats, which are increasingly on the rise seeking exploits against LLMs, including those backed by large enterprises like Meta.

Part of Cisco’s AI security strategy is software called “AI Defense,” which sets guardrails and protections so enterprises can use these open-source LLMs safely. The future of AI is supporting open ecosystems and partnerships.

Cisco is focused on giving customers choice. Cisco’s AI PODs are focused on 💡 AI inferencing (getting a trained model to produce an outcome). As models evolve to multi-step reasoning, breaking a complex request into multiple steps and in many cases showing their work to the user, a significant scaling law requires more compute, often referred to as test-time compute. Gemini 2.0 Flash, DeepSeek-R1, and o1-mini are examples of multi-step reasoning models. More reasoning can equate to more accurate responses and is critical for autonomous AI agents and physical AI.

This also means there is a need for more proficient connectivity. Jonathan Ross, CEO of Groq, holds a similar belief that test-time compute will become 100x the expense of AI training. Although Cisco offers compute platforms for training, the 885A and 845A, very few organizations will invest in training at scale to create their own foundation model; rather, they will augment open-source models with their own domain knowledge, transfer learning, and agentic AI.

Take a moment to broaden your view beyond AI and into security. Not too far off, and possibly now with Microsoft’s release of Majorana, we will be in a post-quantum era. This era will require a network that can support, adapt to, respond to, and withstand post-quantum threats. Since Cisco’s Silicon One will be supported in NVIDIA’s Spectrum switch, Cisco essentially has kernel space it can use to protect at the kernel level. Cisco’s acquisition of Isovalent can enable a distributed, highly secure fabric at that kernel level. Security that tightly coupled with the network will be a requirement, and Cisco is in the best position to support it with the release of the N9300 data processing unit switch.

    Keep exploring and happy engineering!

    Did you find this useful? I’m turning AI complexity into friendly chats & aha moments 💡- Join thousands in receiving valuable AI & ML content by subscribing to the weekly newsletter.

    What do you get for subscribing?

    • I will teach you about AI & ML practically
    • You will gain valuable insight on how to adopt AI
    • You will receive recommended readings and audio references for when you are on the go

    Mike

    Transforming Data: A Beginner’s Guide to Feature Engineering

    Have you ever wondered how machines can understand customer preferences, house prices, or even text messages? The answer lies in feature engineering – one of the most crucial yet often overlooked aspects of machine learning.

    What is Feature Engineering?

    Feature engineering 💡 transforms raw data into meaningful features that help machine learning models better understand patterns and make more accurate predictions. Think of it like translating raw ingredients into a form ready for cooking. Just as a chef needs properly prepared ingredients to make a delicious meal, a machine-learning model needs well-engineered features to make accurate predictions.

    Why is Feature Engineering Important?

    Even the most sophisticated machine learning algorithms can fail if fed poor-quality features. Here’s why feature engineering matters:

    1. Better Model Performance: Well-engineered features can capture important patterns in your data that might otherwise be hidden. For example, instead of using raw dates, creating features like “day of the week” or “is_weekend” might better predict shopping behavior.
    2. Domain Knowledge Integration: Feature engineering allows us to incorporate our understanding of the problem into the model. If we’re predicting house prices, we might create a feature that combines square footage and location, knowing that price per square foot varies by neighborhood.
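The date-based features mentioned above can be sketched in a few lines of plain Python. This is a minimal illustration (the function name `date_features` is my own, not from the original post):

```python
from datetime import date

def date_features(d):
    """Derive features from a raw date: day of week and weekend flag."""
    return {
        "day_of_week": d.weekday(),      # Monday = 0 .. Sunday = 6
        "is_weekend": d.weekday() >= 5,  # Saturday or Sunday
    }

feats = date_features(date(2024, 2, 10))  # Feb 10, 2024 was a Saturday
# {'day_of_week': 5, 'is_weekend': True}
```

A model fed `is_weekend` can pick up on weekend shopping patterns that a raw date string would hide.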

    Understanding Data Types

    Before diving into feature engineering techniques, let’s understand the two main types of data we typically encounter:

    Quantitative Data

    This is numerical data that you can perform mathematical operations on. For example:

    • Age (25, 30, 45)
    • Temperature (98.6°F, 102.3°F)
    • Sales amount ($100, $250, $500)

    Qualitative Data

    This represents categories or qualities that can’t be measured numerically. For example:

    • Colors (Red, Blue, Green)
    • Education level (High School, Bachelor’s, Master’s)
    • Customer satisfaction (Very Satisfied, Satisfied, Dissatisfied)

    Essential Encoding Techniques for Beginners

    When working with qualitative data, we need to convert it into numbers for our machine-learning models. Here are two fundamental encoding techniques:

    One-Hot Encoding💡

Imagine you have a “color” feature with values: Red, Blue, and Green. One-hot encoding creates separate columns for each unique value:

    Color    Red   Blue   Green
    Red       1     0      0
    Blue      0     1      0
    Green     0     0      1

This is perfect for categorical data where there is no natural order between values. Each category is given equal importance, and the model can treat them independently.
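Here is a minimal one-hot encoding sketch in plain Python, just to show the mechanics; in practice you would reach for `pandas.get_dummies` or scikit-learn’s `OneHotEncoder`:

```python
colors = ["Red", "Blue", "Green", "Blue"]
categories = sorted(set(colors))  # ['Blue', 'Green', 'Red']

def one_hot(value, categories):
    """Return a 0/1 vector with a 1 in the position of `value`."""
    return [1 if value == c else 0 for c in categories]

encoded = [one_hot(v, categories) for v in colors]
# 'Red' -> [0, 0, 1], 'Blue' -> [1, 0, 0], 'Green' -> [0, 1, 0]
```

Note that each row has exactly one 1, which is where the name “one-hot” comes from.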

    Ordinal Encoding💡

    When your categories have a natural order (like education levels), ordinal encoding assigns numbers based on that order:

    Education Level    Encoded Value
    High School       1
    Bachelor's        2
    Master's          3
    PhD              4

    This preserves the relative relationship between categories while converting them to a numerical format the model can understand.
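The table above translates directly into a dictionary lookup. A minimal sketch (scikit-learn’s `OrdinalEncoder` is the library route):

```python
# Map ordered categories to integers that preserve their ranking.
education_order = {"High School": 1, "Bachelor's": 2, "Master's": 3, "PhD": 4}

records = ["Bachelor's", "PhD", "High School"]
encoded = [education_order[level] for level in records]
# [2, 4, 1]
```

Because the numbers preserve the ordering, a model can learn that “more education” moves predictions in a consistent direction.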

    Tips for Beginners

    1. Start Simple: Begin with basic feature engineering techniques and gradually explore more complex ones as you gain confidence.
    2. Understand Your Data: Before applying any encoding technique, understand what your data represents and how different features relate.
    3. Document Your Process: Track how you’ve engineered your features. This will help you replicate your success and troubleshoot issues.
    4. Validate Your Results: Always check if your feature engineering improves model performance. Sometimes simpler is better!

    Remember, feature engineering is both an art and a science. It requires creativity, domain knowledge, and experimentation. As you practice, you’ll develop an intuition for which techniques work best in different situations.

    Keep exploring and happy engineering!


    Sources:

    What Is One Hot Encoding and How to Implement It in Python

    Cisco AI Infrastructure PODs: Configurations for Every Inference Use Case


    AI Infrastructure PODs play a vital role in addressing the challenges and opportunities presented by the increasing adoption of AI. They offer a comprehensive, scalable, and performance-optimized solution that simplifies AI deployments and empowers organizations to unlock the full potential of AI across various applications and industries.

Specifically focused on inferencing: 💡 AI inferencing is the process of using a trained artificial intelligence model to make predictions or decisions based on new, unseen data. After you train a model, you need to use it. That is inferencing.
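The train-then-infer split can be shown with a toy model. This is a pure-Python sketch of the concept (real AI POD workloads serve large trained models, not a one-parameter fit like this):

```python
def train(xs, ys):
    """'Training': fit the slope w for y = w * x by least squares."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def infer(w, x_new):
    """'Inference': apply the already-trained model to unseen data."""
    return w * x_new

w = train([1, 2, 3], [2, 4, 6])  # learn w = 2.0 from historical data
prediction = infer(w, 10)        # predict on a new input
```

Training is the expensive, one-time step; inference is the cheap, repeated step you run in production, which is what the PODs below are sized for.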

    Cisco’s AI Infrastructure PODs are pre-configured, validated bundles designed for various AI and ML use cases. These PODs offer different CPU, GPU, and memory resource configurations to meet specific workload requirements. Here’s a breakdown of the four configurations and their intended use cases:

Cisco’s AI Infrastructure POD Configurations and Use Cases (comparison chart not shown)

    Factors Influencing POD Selection

    The choice of POD configuration depends on several factors, including:

    • Model Size and Complexity: Larger, more complex models require more computational resources, typically provided by higher-end GPUs and more memory.
    • Performance Requirements: Applications requiring real-time responsiveness necessitate PODs with optimized performance characteristics, such as low latency and high throughput.
    • Scalability Needs: Organizations anticipating growth in AI workloads should opt for PODs that can scale dynamically by adding or removing resources as needed.
    • Use Case Specificity: Different use cases, such as edge inferencing, 💡Retrieval-Augmented Generation (RAG), which leverages knowledge sources to provide contextual relevance during a query, or large-scale model deployment, have distinct requirements that influence POD selection.

    Cisco’s AI Infrastructure PODs provide a flexible and scalable foundation for diverse AI workloads. By understanding each POD’s specific configurations and intended use cases, organizations can choose the optimal solution to accelerate their AI initiatives and unlock the potential of this transformative technology.


    Sources:
    AI PODs for Inferencing Data Sheet

    Generative AI Inferencing Use Cases with Cisco UCS

    AI PODs for Inferencing At a Glance