
Glossary

Learn from the inside.
Extend your knowledge of the technology that revolutionizes the network security industry.

Featured Terms

  • Multi Cloud Networking (MCN)

    More and more businesses are saying goodbye to on-prem data centers and moving their workloads to the cloud. Gartner estimates that by 2025, 85% of organizations will embrace a cloud-first principle and 95% of new digital workloads will be deployed on cloud-native platforms, more than triple the figure from 2021 (30%).

    Migrating to the cloud brings with it a whole host of benefits, from scalability and reduced costs to availability and reliability. However, it also means rethinking security.

    Keeping your network protected becomes even more complicated, given the rise of multi-cloud and hybrid-cloud networks. Distributing workloads across various cloud and on-prem infrastructure creates inherent visibility issues. Enterprise customers struggle to get a comprehensive view of their network when it is dispersed across different environments and providers.

    This lack of visibility creates operational governance blind spots, leaving businesses struggling to understand who is on their network and what they are doing. While the network used to be the perimeter, the moat in the castle-and-moat model, it is now the last source of truth for what is actually going on.

    Businesses may think they have a clear understanding of their cloud workloads, but when they look under the hood they start to see new dependencies, parts of the network they didn’t realize were talking to each other, and out-of-date systems or components previously thought to be redundant.

    A surprising number of enterprise customers have only limited controls in place to understand the data coming in and out of their network. Operators are in the perfect position to solve this, delivering cloud-native Intrusion Detection and Prevention Systems (IDPSs) to protect multi-cloud networking customers.

  • Cloud on Ramp

    Network operators have always delivered fast, reliable, and secure connectivity to enterprise customers. Now these customers are on the move: they no longer operate from a single location, and much of their work is transitioning to the public cloud. In fact, 76% of enterprises today already have a multi-cloud strategy, with adoption projected to reach 90% within two years.
    The connectivity infrastructure must keep up, not only bringing enterprises to the doorstep of public clouds but also unlocking these doors for them and safely connecting them to their assets within the cloud.

    The insidepacket cloud-on-ramp platform is designed for the operator, extending the reach of infrastructure delivery into the public clouds. With onboarding and operational flows optimized for speed and ease, and a comprehensive suite of cloud-connectivity and security services designed to address any hybrid and multi-cloud deployment scenario, it enables network operators to extend their infrastructure offering into the clouds and beyond.

    Read more about our Cloud on Ramp solution for Operators Here

  • Network as a Service (NaaS)

    Cloud Network-as-a-Service is an IT model where customers pay a subscription to operate the network they need. Like other “as-a-Service” solutions, NaaS allows businesses to rent services, outsourcing IT infrastructure to others without maintaining their own hardware. Vendors provide network functionality using software, meaning all companies need to add is connectivity.

    This new way of thinking about enterprise IT brings with it many upsides. Our blog covers the top five NaaS benefits for companies and the top five reasons operators need to get in on the action, offering their customers the modern networks they want.

    To learn more, read our Blog about The Top 5 NaaS Benefits for Improved Network Performance and Security

  • Secure Access Service Edge (SASE)

    The unprecedented rise of the hybrid workplace model stretched enterprise IT architecture beyond its limits. Legacy IT networks were designed with the physical data center and perimeter defense in mind; cloud architectures and services disrupted this.

    Users, devices, applications, and services now connect to a variety of environments – on-prem, private, hybrid, and public cloud services. This required a paradigm shift in security and connectivity – all solved by the SASE approach, which is based on Zero Trust Network Access.

    However, the heavy workloads and mission-critical applications, the ones at the core of your business, were left behind. They are still running on monolithic security architectures, missing the capabilities that the SASE approach offers.

    To learn more about the insidepacket SASE solution, click HERE

  • Multi-tenancy

    A multi-tenant environment is a software architecture in which multiple independent users share the same underlying infrastructure. This can include physical hardware, operating systems, storage, and networking resources. Each user, or tenant, is isolated from the others, and their data and applications are kept separate.
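    To make the isolation model concrete, here is a minimal sketch (hypothetical class and method names, not InsidePacket code) of how one shared platform can keep each tenant’s networks and policies in a separate namespace:

```python
from dataclasses import dataclass, field


@dataclass
class Tenant:
    """One tenant's logically isolated slice of the shared platform (illustrative only)."""
    name: str
    networks: dict = field(default_factory=dict)   # tenant-private networks
    policies: list = field(default_factory=list)   # tenant-private security policies


class SharedPlatform:
    """One shared infrastructure hosting many isolated tenants."""

    def __init__(self) -> None:
        self._tenants: dict[str, Tenant] = {}

    def add_tenant(self, name: str) -> Tenant:
        tenant = Tenant(name=name)
        self._tenants[name] = tenant
        return tenant

    def get(self, tenant_name: str) -> Tenant:
        # Every lookup is scoped by tenant, so one tenant can never see
        # another tenant's networks or policies.
        return self._tenants[tenant_name]


platform = SharedPlatform()
acme = platform.add_tenant("acme")
acme.networks["prod"] = "10.10.0.0/16"
globex = platform.add_tenant("globex")
globex.networks["prod"] = "10.10.0.0/16"   # same prefix, no conflict: separate namespaces
```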

    Multi-tenancy environments can offer a number of benefits, including:

    • Cost savings: By sharing the same infrastructure, resources, and services, operators can save money on capital expenditures (CAPEX) and operating expenses (OPEX).
    • Increased efficiency: Multi-tenancy environments can help operators to improve efficiency by streamlining operations and reducing the need for manual intervention.
    • Improved scalability: Multi-tenancy environments can be easily scaled up or down to meet the changing needs of businesses. This can be beneficial for operators who need to support a large number of customers or who experience spikes in demand.
    • Improved security: Multi-tenancy environments can be more secure than traditional single-tenant environments. This is because operators can implement security measures that protect the entire environment, rather than just individual tenants.

    The InsidePacket NaaS solution is designed for high-scale multi-tenancy environments, simplifying operations, improving security, enhancing scalability, and reducing costs.

    To learn more about the benefits of the insidepacket multi-tenancy architecture, read about NSOS and NIRO.

  • Security Service Edge (SSE) Platform

    A Security Service Edge (SSE) platform is a cloud-delivered security solution that provides security at the edge of the network. SSE platforms typically include a variety of security services, such as secure web gateway (SWG), cloud access security broker (CASB), and zero trust network access (ZTNA).

    SSE platforms can be deployed in a variety of ways, including:

    • As a cloud service: SSE platforms can be deployed as a cloud service, which is the most common deployment model. This allows organizations to easily scale the platform up or down to meet demand.
    • As a virtual appliance: SSE platforms can also be deployed as a virtual appliance on-premises. This is a good option for organizations that have strict security requirements or that need to comply with regulations.
    • As a hybrid deployment: SSE platforms can also be deployed in a hybrid fashion, with some components in the cloud and some components on-premises. This is a good option for organizations that want the flexibility of the cloud but also need the security of an on-premises deployment.

    SSE platforms offer a number of benefits, including:

    • Simplified security: SSE platforms can help organizations to simplify security by centralizing security controls and providing a single pane of glass for managing security.
    • Improved visibility: SSE platforms can help organizations to improve visibility into their network traffic and applications. This can help organizations to identify and respond to security threats more quickly.
    • Reduced costs: SSE platforms can help organizations to reduce costs by eliminating the need to purchase and maintain on-premises security appliances.

    The insidepacket SSE platform contains a comprehensive services suite, fully flexible and consumed on demand – from a single service such as routing, firewall, web-threat blocking, content filtering, or multi-cloud connectivity up to a complete Security Service Edge stack. The solution is inherently designed for multi-tenancy and is therefore built for operators offering SSE solutions to their enterprise customers.

    To learn more about the insidepacket SSE platform, click HERE

  • White Box Switch

    White box switches are network switches that are assembled from standardized commodity parts and run on off-the-shelf chips. Unlike traditional switches, which are proprietary and come with their own software, white box switches are “blank” hardware that users can choose to develop or purchase a third-party network operating system (NOS) for.

    The open architecture of white box switches allows for the separation and disaggregation of switch software and hardware, improving network openness, flexibility, and programmability. This also enables a more transparent price while reducing OPEX/CAPEX and breaking free from vendor lock-in.

    InsidePacket has built the world’s first Network Security OS (NSOS) that runs on commodity off-the-shelf hardware. This revolutionary technology is capable of orders-of-magnitude acceleration of L2-L7 packet processing and services on any programmable hardware, including SmartNICs and white box switches.

    To learn more about the benefits of insidePacket architecture, read about NSOS.

  • Latency in Networks

    Latency in a network refers to the time it takes for a packet of data to travel from one point to another on a network. It is measured in milliseconds (ms). Latency is important because it affects the performance of applications that rely on network communication, such as online gaming, video streaming, and VoIP.

    Latency is an important factor to consider when designing and deploying networks. By understanding the factors that can affect latency and taking steps to improve it, network administrators can ensure that applications that rely on network communication perform well.
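    As a rough illustration (a sketch only, not a substitute for proper network monitoring), one way to approximate latency is to time how long a TCP connection takes to complete:

```python
import socket
import time


def tcp_connect_latency_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Approximate network latency as the time to complete a TCP handshake, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only care how long it took
    return (time.perf_counter() - start) * 1000.0


if __name__ == "__main__":
    # Example: measure connect latency to a public endpoint (host chosen for illustration).
    print(f"{tcp_connect_latency_ms('example.com'):.1f} ms")
```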

    Reasons why latency is becoming more important for enterprise networks:

    • The rise of real-time applications: Real-time applications, such as online gaming, video conferencing, and VoIP, require low latency in order to function properly. If latency is too high, these applications will experience delays and jitter, which can make them unusable.
    • The growth of the Internet of Things (IoT): The IoT is connecting billions of devices to the Internet, which is increasing the amount of traffic on networks. This can lead to higher latency, especially for applications that are not designed to handle high volumes of traffic.
    • The adoption of cloud computing: Cloud computing is becoming increasingly popular for enterprises, as it offers a number of benefits, such as scalability and cost-effectiveness. However, cloud computing can also introduce latency, as data has to travel to and from the cloud servers.
    • The need for security: Enterprises are increasingly adopting security measures, such as firewalls and intrusion detection systems, to protect their networks from cyberattacks. These security measures can add latency to network traffic, as they have to inspect each packet of data.

    As the demand for real-time applications and services continues to grow, latency will become even more important for enterprise networks. Enterprises need to take steps to reduce latency in their networks in order to ensure that their applications and services perform well.

    Insidepacket’s inspect-once paradigm: in classic service networks, to give a customer the full set of network and security services that NSOS offers in a single system, each data packet needs to traverse multiple separate systems from different vendors. With NSOS, a packet is processed once; the relevant data is collected and shared across all services, resulting in coherent decision-making, protection, unparalleled performance, and low latency, making it a perfect fit for latency-sensitive applications.

    To learn more about the benefits of the insidepacket inspect-once paradigm architecture, read about NSOS.

  • Brownfield insertion into operator networks

    Brownfield insertion into operator networks is the process of adding new equipment or services to an existing network without making any major changes to the existing infrastructure. This can be done in a variety of ways, including:

    • Overlaying a new network on top of the existing network: This is the most common method of brownfield insertion. The new network is essentially a virtual network that runs on top of the existing network. This is a relatively easy and cost-effective way to add new services to an existing network.
    • Using software-defined networking (SDN): SDN can be used to brownfield insert new services into an existing network by decoupling the control plane from the data plane. This allows new services to be added to the network without having to make any changes to the physical infrastructure.
    • Using network function virtualization (NFV): NFV can be used to brownfield insert new services into an existing network by virtualizing network functions. This allows new services to be added to the network without having to purchase new hardware.

    Brownfield insertion into operator networks can be a complex process, but it can be a cost-effective way to add new services to an existing network. By carefully planning the brownfield insertion process, operators can minimize the impact on the existing network and ensure that the new services are implemented in a secure and reliable manner.

    However, there are also some challenges associated with brownfield insertion into operator networks, such as:

    • Complexity: Brownfield insertion can be a complex process. This is because it requires careful planning and coordination.
    • Security: Brownfield insertion can introduce security risks. This is because it involves adding new equipment and services to the network.
    • Compatibility: Brownfield insertion can be challenging if the existing network is not compatible with the new equipment or services.
    • Performance: Brownfield insertion can impact the performance of the existing network. This is because it adds new traffic to the network.
    • Cost: Brownfield insertion can be a costly process. This is because it requires the purchase of new equipment and services.

    With the insidepacket NaaS platform, operators can enable multiple, scalable services for their tenants on top of existing infrastructure, adding network and security services in seconds. There is no more integration work between security and networking vendors and no complex infrastructure setup, while operators retain full flexibility in deployment with built-in room for growth.

    The networking concept of the insidepacket solution is built for NaaS. It abstracts tenant networking from the physical world by maintaining Virtual Interfaces (VIs) that are matched to the L2 interfaces and ports that represent the physical reality.

    Another aspect of this is that tenant networks are considered “overlay networks”, while connectivity between the nodes themselves is considered the “underlay”. This allows the tenants’ networks to be fully virtualized while still controlling each NSOS System’s network.

    When a tenant is created on more than one system and Auto-Mesh is enabled, the insidepacket solution will create a mesh network between the different NSOS Systems for that specific tenant.
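    The overlay/underlay split and the Auto-Mesh behavior can be illustrated with a short sketch (hypothetical names, not the actual NSOS data model): tenant Virtual Interfaces are pinned to physical ports, and when a tenant spans several systems, a full mesh of underlay tunnels is derived between those systems:

```python
from itertools import combinations


class TenantOverlay:
    """Sketch of a tenant's virtual network view on top of the physical underlay."""

    def __init__(self, name: str) -> None:
        self.name = name
        # Virtual Interface -> (system, physical L2 port) it is matched to.
        self.virtual_interfaces: dict[str, tuple[str, str]] = {}

    def attach(self, vi: str, system: str, port: str) -> None:
        self.virtual_interfaces[vi] = (system, port)

    def auto_mesh(self) -> list[tuple[str, str]]:
        """Return the underlay tunnels needed for a full mesh between all
        systems this tenant is present on (the Auto-Mesh idea)."""
        systems = {system for system, _ in self.virtual_interfaces.values()}
        return list(combinations(sorted(systems), 2))


tenant = TenantOverlay("acme")
tenant.attach("vi-hq", system="nsos-nyc", port="eth1/1")
tenant.attach("vi-branch", system="nsos-lon", port="eth2/3")
tenant.attach("vi-cloud", system="nsos-aws-east", port="vport-7")
print(tenant.auto_mesh())   # one tunnel per pair of systems hosting this tenant
```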

     

    To learn more about Insidepacket brownfield insertion technology, request a demo from one of our experts.

  • Software-Defined Networking (SDN)

    Software-defined networking (SDN) is an approach to networking that separates the control plane from the data plane. The control plane is responsible for managing the network, while the data plane is responsible for forwarding traffic. This separation allows for more flexible and programmable network management, making it easier to adapt to changing network requirements and optimize traffic flow.

    SDN is often used to improve network efficiency, scalability, and automation in data centers and large-scale networks. It is also required for any NaaS (Network as a Service) solution.

    InsidePacket’s NaaS solution is built on SDN principles. This means that all networking functions are virtualized and controlled by a software controller. The controller uses APIs to communicate with the underlying hardware, which allows for dynamic and efficient network management.

    For example, the InsidePacket controller can be used to create virtual networks, assign policies, and monitor traffic. It can also be used to automatically optimize the network for different workloads.

    By using SDN, InsidePacket can deliver a NaaS solution that is scalable, flexible, and easy to manage. This makes it a good choice for businesses of all sizes that need to improve their network performance and agility.
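    As a toy illustration of the control-plane/data-plane split (a conceptual sketch, not InsidePacket’s controller API), a controller can compute forwarding decisions centrally and push them to switches that do nothing but forward:

```python
class Switch:
    """Data plane: forwards packets strictly according to its installed flow table."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.flow_table: dict[str, str] = {}   # destination prefix -> output port

    def install_flow(self, dst_prefix: str, out_port: str) -> None:
        self.flow_table[dst_prefix] = out_port

    def forward(self, dst_ip: str) -> str:
        # Longest-prefix match is simplified to a plain prefix check here.
        for prefix, port in self.flow_table.items():
            if dst_ip.startswith(prefix):
                return port
        return "drop"


class Controller:
    """Control plane: holds the network-wide view and programs every switch."""

    def __init__(self) -> None:
        self.switches: list[Switch] = []

    def register(self, switch: Switch) -> None:
        self.switches.append(switch)

    def apply_policy(self, dst_prefix: str, out_port: str) -> None:
        # One central decision, pushed down to the whole data plane.
        for switch in self.switches:
            switch.install_flow(dst_prefix, out_port)


controller = Controller()
edge = Switch("edge-1")
controller.register(edge)
controller.apply_policy("10.0.", out_port="port-2")
print(edge.forward("10.0.5.9"))   # "port-2": decided by the controller, executed by the switch
```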

  • Network Function Virtualization (NFV)

    Network Function Virtualization (NFV) is the practice of running network functions, such as firewalls, NAT, and load balancers, as software applications on virtual machines (VMs). This allows for greater flexibility and agility in network operations, as well as the ability to scale network functions more easily.

    Traditionally, NFV has been implemented by deploying VNFs on a hypervisor, such as VMware vSphere or OpenStack. The VNFs are then stitched together by an orchestrator, such as OpenStack Heat or Kubernetes, to create a chain of services in a network.

    InsidePacket’s solution takes this approach even further by virtualizing the functions themselves and applying them directly to the data plane of the solution. This removes the need to chain services and allows for more efficient and scalable networking.
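    The traditional service-chain idea can be sketched in a few lines (hypothetical functions, not a real VNF orchestrator): each virtual network function is just software that takes a packet and either passes it on or drops it:

```python
from typing import Callable, Optional

Packet = dict                                  # toy packet: {"src": ..., "dst": ..., "port": ...}
VNF = Callable[[Packet], Optional[Packet]]     # a VNF returns the packet, or None to drop it


def firewall(packet: Packet) -> Optional[Packet]:
    # Drop anything that is not destined for the HTTPS port.
    return packet if packet.get("port") == 443 else None


def nat(packet: Packet) -> Optional[Packet]:
    # Rewrite the source address to the gateway's public address.
    return {**packet, "src": "203.0.113.10"}


def run_chain(packet: Packet, chain: list[VNF]) -> Optional[Packet]:
    """Pass the packet through each VNF in order, stopping if one drops it."""
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:
            return None
    return packet


print(run_chain({"src": "10.0.0.5", "dst": "198.51.100.7", "port": 443}, [firewall, nat]))
```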

  • Firewall

    A stateful firewall can handle traffic based on connections, in addition to a standard ACL (Access Control List). Each connection is assigned a state, and packets are allowed based on whether they belong to an allowed connection. This approach is much more secure than ACLs alone, as it allows the firewall to track the state of each connection and only allow packets that are part of an established connection.

    This approach also makes the configuration of rule sets easier, as only the incoming direction needs to be configured. This is because the firewall will automatically allow returning packets for an allowed connection. Unsolicited packets with the same source and destination IPs will be blocked, regardless of their direction.

    Here are some of the benefits of using connection-based firewalling:

    • Increased security: Connection-based firewalling can help to prevent attacks such as SYN flooding and port scanning.
    • Simplified rule sets: Only the incoming direction needs to be configured, which makes rule sets easier to manage.
    • Improved performance: Connection-based firewalling can improve performance by reducing the number of packets that need to be inspected.

    Overall, connection-based firewalling is a more secure and efficient way to manage firewall traffic.

    Here are some additional details about connection-based firewalling:

    • The firewall keeps track of the state of each connection, such as the source and destination IP addresses, ports, and the direction of the traffic.
    • When a new connection is established, the firewall creates a state table entry for the connection.
    • The firewall allows packets that are part of an established connection.
    • The firewall blocks packets that are not part of an established connection.
    • The firewall can also track the state of connections that are in the process of being terminated.

    Connection-based firewalling is a more complex approach than ACL-based firewalling, but it offers several advantages in terms of security and performance.
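    A minimal sketch of the state-table logic described above (illustrative only, not the insidepacket firewall implementation): policy is written for the initiating direction, state is recorded per connection, and only matching return traffic is admitted:

```python
class StatefulFirewall:
    """Toy connection-tracking firewall: allow outbound, admit only matching return traffic."""

    def __init__(self) -> None:
        # Established connections, keyed by (src_ip, src_port, dst_ip, dst_port).
        self.state_table: set[tuple[str, int, str, int]] = set()

    def outbound(self, src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> bool:
        # Rules only need to cover the initiating direction; record the connection state.
        self.state_table.add((src_ip, src_port, dst_ip, dst_port))
        return True

    def inbound(self, src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> bool:
        # A packet is allowed only if it is the reverse of an established connection;
        # unsolicited packets are blocked.
        return (dst_ip, dst_port, src_ip, src_port) in self.state_table


fw = StatefulFirewall()
fw.outbound("10.0.0.5", 50123, "198.51.100.7", 443)        # client opens a connection
print(fw.inbound("198.51.100.7", 443, "10.0.0.5", 50123))  # True: reply to that connection
print(fw.inbound("198.51.100.7", 443, "10.0.0.5", 50999))  # False: unsolicited packet
```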

  • Network Segmentation

    Network segmentation divides a network into smaller, isolated segments. In a NaaS (Network as a Service) solution, it allows enterprise customers to group and name connections into different segments of their network.

    When using a NaaS solution, the benefits of network segmentation include:

    • Improved security: Network segmentation can help to prevent unauthorized access to data and resources.
    • Improved performance: Network segmentation can help to improve network performance by reducing the amount of traffic that needs to be routed.
    • Improved manageability: Network segmentation can help to simplify network management by making it easier to identify and troubleshoot problems.

    In the InsidePacket NaaS solution, interfaces can even belong to multiple segments at once. This allows for greater flexibility and control over how traffic is routed. Communication can be controlled based on segments, both within segments and between segments.

    For example, you could create a segment for all of your web servers, and another segment for all of your database servers. You could then configure the firewall to only allow traffic between these two segments. This would help to protect your web servers from attacks that originate from your database servers.

    You could also create a segment for all of your users in the United States, and another segment for all of your users in Europe. You could then configure the firewall to only allow traffic between these two segments. This would help to improve performance by reducing the amount of traffic that needs to be routed between the two regions.
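    The web-server/database-server example above can be expressed as a short sketch (a hypothetical policy model, not the actual InsidePacket configuration), including an interface that belongs to more than one segment:

```python
# Map each interface to the segments it belongs to (an interface may be in several).
segments = {
    "eth-web-1": {"web"},
    "eth-web-2": {"web", "monitoring"},
    "eth-db-1": {"db"},
}

# Allowed (source segment, destination segment) pairs; everything else is denied.
allowed = {("web", "db")}


def permitted(src_if: str, dst_if: str) -> bool:
    """Allow traffic only if some segment pairing of the two interfaces is allowed."""
    return any(
        (s, d) in allowed
        for s in segments[src_if]
        for d in segments[dst_if]
    )


print(permitted("eth-web-1", "eth-db-1"))  # True: web -> db is explicitly allowed
print(permitted("eth-db-1", "eth-web-1"))  # False: db -> web was never allowed
```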

  • Cloud Router

    Every tenant in a cloud-based networking solution has their own cloud router instance. This router manages all connectivity for the tenant, including static and dynamic routing. It also provides information via Looking Glass and statistics on connections and neighbors.

    When a tenant connects resources to different locations in the network, the cloud router automatically and transparently builds a mesh network between the physical locations. This ensures that all routes and network resources are available to the tenant, regardless of where their assets are connected.

    Here are some of the benefits of this approach:

    • Tenant isolation: Each tenant has their own dedicated cloud router, which helps to isolate traffic and improve security.
    • Scalability: The cloud router can be scaled to meet the needs of the tenant, regardless of how many resources they have connected to the network.
    • Flexibility: The cloud router supports a variety of routing protocols, which gives the tenant the flexibility to choose the best routing solution for their needs.
    • Manageability: The cloud router is easy to manage using a graphical user interface (GUI) or application programming interfaces (APIs).
    • Performance: The cloud router is designed to provide high performance and low latency.

    Overall, this approach provides tenants with a reliable, secure, and scalable way to manage their network connectivity.

    Here are some additional details about the cloud router:

    • The cloud router is a virtual appliance that is deployed in the cloud.
    • The cloud router supports a variety of routing protocols, including OSPF, BGP, and RIP.
    • The cloud router can be configured using the GUI or APIs.
    • The cloud router provides a variety of features, including Looking Glass, statistics, and monitoring.
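    As a conceptual sketch (hypothetical classes, not the product’s API), the per-tenant model means every tenant gets an independent router instance whose route table aggregates the routes learned at each of that tenant’s locations:

```python
class CloudRouter:
    """Toy per-tenant router: aggregates routes from all of one tenant's locations."""

    def __init__(self, tenant: str) -> None:
        self.tenant = tenant
        self.routes: dict[str, str] = {}   # prefix -> location where the route was learned

    def learn(self, prefix: str, location: str) -> None:
        self.routes[prefix] = location

    def looking_glass(self) -> dict[str, str]:
        # Read-only view of the tenant's routing state, in the spirit of the Looking Glass feature.
        return dict(self.routes)


# Each tenant gets its own router instance, so their routes never mix.
routers = {name: CloudRouter(name) for name in ("acme", "globex")}
routers["acme"].learn("10.1.0.0/16", "on-prem-nyc")
routers["acme"].learn("172.31.0.0/16", "aws-us-east-1")
routers["globex"].learn("10.1.0.0/16", "azure-westeurope")   # same prefix, isolated per tenant
print(routers["acme"].looking_glass())
```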