Add support for loadBalancerSourceRanges in LoadBalancer Service
For #5493

This commit introduces support for loadBalancerSourceRanges for LoadBalancer
Services.

Here is an example of a LoadBalancer Service configuration allowing access
from specific CIDRs:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-loadbalancer-source-ranges
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
  loadBalancerSourceRanges:
    - "192.168.77.0/24"
    - "192.168.78.0/24"
status:
  loadBalancer:
    ingress:
      - ip: 192.168.77.152
```

To implement `loadBalancerSourceRanges`, a new table LoadBalancerSourceRanges is introduced
after table PreRoutingClassifier. Here are the corresponding flows:

```text
1. table=LoadBalancerSourceRanges, priority=200,tcp,nw_src=192.168.77.0/24,nw_dst=192.168.77.152,tp_dst=80 actions=goto_table:SessionAffinity
2. table=LoadBalancerSourceRanges, priority=200,tcp,nw_src=192.168.78.0/24,nw_dst=192.168.77.152,tp_dst=80 actions=goto_table:SessionAffinity
3. table=LoadBalancerSourceRanges, priority=190,tcp,nw_dst=192.168.77.152,tp_dst=80 actions=drop
4. table=LoadBalancerSourceRanges, priority=0 actions=goto_table:SessionAffinity
```

Flows 1-2 allow packets destined for the sample [LoadBalancer] from the CIDRs specified in the `loadBalancerSourceRanges`
of the Service.

Flow 3, with lower priority, drops packets destined for the sample [LoadBalancer] that don't match any CIDRs within the
`loadBalancerSourceRanges`.
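
For illustration only, here is a minimal Go sketch of how such flows could be derived from a Service's
`loadBalancerSourceRanges`: one allow flow per CIDR at priority 200, plus a lower-priority catch-all drop flow.
The `flowSpec` type and `sourceRangeFlows` helper are hypothetical, not Antrea's actual flow builder API.

```go
package main

import "fmt"

// flowSpec is a simplified, illustrative representation of an OVS flow in the
// LoadBalancerSourceRanges table; it is not Antrea's real flow abstraction.
type flowSpec struct {
	priority int
	match    string
	action   string
}

// sourceRangeFlows sketches the flow-generation logic described above: one
// allow flow per CIDR in loadBalancerSourceRanges, plus a lower-priority drop
// flow for traffic to the same ingress IP/port that matches none of the CIDRs.
func sourceRangeFlows(protocol, ingressIP string, port int, sourceRanges []string) []flowSpec {
	flows := make([]flowSpec, 0, len(sourceRanges)+1)
	for _, cidr := range sourceRanges {
		flows = append(flows, flowSpec{
			priority: 200,
			match:    fmt.Sprintf("%s,nw_src=%s,nw_dst=%s,tp_dst=%d", protocol, cidr, ingressIP, port),
			action:   "goto_table:SessionAffinity",
		})
	}
	flows = append(flows, flowSpec{
		priority: 190,
		match:    fmt.Sprintf("%s,nw_dst=%s,tp_dst=%d", protocol, ingressIP, port),
		action:   "drop",
	})
	return flows
}

func main() {
	// Reproduces flows 1-3 from the dump above for the sample Service.
	for _, f := range sourceRangeFlows("tcp", "192.168.77.152", 80, []string{"192.168.77.0/24", "192.168.78.0/24"}) {
		fmt.Printf("table=LoadBalancerSourceRanges, priority=%d,%s actions=%s\n", f.priority, f.match, f.action)
	}
}
```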

Signed-off-by: Hongliang Liu <[email protected]>
hongliangl committed Sep 27, 2024
1 parent a996421 commit 647e567
Showing 15 changed files with 497 additions and 134 deletions.
1 change: 1 addition & 0 deletions cmd/antrea-agent/agent.go
@@ -182,6 +182,7 @@ func run(o *Options) error {
enableFlowExporter,
o.config.AntreaProxy.ProxyAll,
features.DefaultFeatureGate.Enabled(features.LoadBalancerModeDSR),
*o.config.AntreaProxy.ProxyLoadBalancerIPs,
connectUplinkToBridge,
multicastEnabled,
features.DefaultFeatureGate.Enabled(features.TrafficControl),
36 changes: 31 additions & 5 deletions docs/design/ovs-pipeline.md
@@ -319,7 +319,8 @@ spec:

### LoadBalancer

A sample LoadBalancer Service with ingress IP `192.168.77.150` assigned by an ingress controller.
A sample LoadBalancer Service with ingress IP `192.168.77.150` assigned by an ingress controller, and with
`loadBalancerSourceRanges` set to a list of CIDRs.

```yaml
apiVersion: v1
@@ -334,6 +335,9 @@ spec:
port: 80
targetPort: 80
type: LoadBalancer
loadBalancerSourceRanges:
- "192.168.77.0/24"
- "192.168.78.0/24"
status:
loadBalancer:
ingress:
@@ -919,7 +923,7 @@ If you dump the flows of this table, you may see the following:
```text
1. table=NodePortMark, priority=200,ip,nw_dst=192.168.77.102 actions=set_field:0x80000/0x80000->reg4
2. table=NodePortMark, priority=200,ip,nw_dst=169.254.0.252 actions=set_field:0x80000/0x80000->reg4
3. table=NodePortMark, priority=0 actions=goto_table:SessionAffinity
3. table=NodePortMark, priority=0 actions=goto_table:LoadBalancerSourceRanges
```

Flow 1 matches packets destined for the local Node from local Pods. `NodePortRegMark` is loaded, indicating that the
@@ -937,6 +941,28 @@ Note that packets of NodePort Services have not been identified in this table by
identification of NodePort Services will be done finally in table [ServiceLB] by matching `NodePortRegMark` and the
specific destination port of a NodePort.

### LoadBalancerSourceRanges

This table is designed to implement `loadBalancerSourceRanges` for LoadBalancer Services.

If you dump the flows of this table, you may see the following:

```text
1. table=LoadBalancerSourceRanges, priority=200,tcp,nw_src=192.168.77.0/24,nw_dst=192.168.77.152,tp_dst=80 actions=goto_table:SessionAffinity
2. table=LoadBalancerSourceRanges, priority=200,tcp,nw_src=192.168.78.0/24,nw_dst=192.168.77.152,tp_dst=80 actions=goto_table:SessionAffinity
3. table=LoadBalancerSourceRanges, priority=190,tcp,nw_dst=192.168.77.152,tp_dst=80 actions=drop
4. table=LoadBalancerSourceRanges, priority=0 actions=goto_table:SessionAffinity
```

Flows 1-2 match packets destined for the sample [LoadBalancer] whose source IPs fall within the CIDRs specified in the
`loadBalancerSourceRanges` of the Service.

Flow 3, with a lower priority than flows 1-2, also matches packets destined for the sample [LoadBalancer], but drops
them, since their source IPs do not fall within any of the CIDRs specified in the `loadBalancerSourceRanges` of the
Service.

Flow 4 is the table-miss flow.
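
To make the table's semantics concrete, here is a minimal Go sketch of the equivalent per-packet decision, assuming the
sample ingress IP `192.168.77.152` and port 80 from the flows above. The `decide` helper is purely illustrative; the
real matching is performed by the OVS flows, not by Go code.

```go
package main

import (
	"fmt"
	"net/netip"
)

// decide mirrors the LoadBalancerSourceRanges table for a single packet:
// traffic to the sample LoadBalancer ingress IP/port is allowed only when its
// source falls within one of the configured CIDRs (flows 1-2), dropped
// otherwise (flow 3); all other traffic hits the table-miss flow (flow 4).
func decide(src, dst netip.Addr, dstPort uint16, ranges []netip.Prefix) string {
	lbIP := netip.MustParseAddr("192.168.77.152")
	if dst == lbIP && dstPort == 80 {
		for _, p := range ranges {
			if p.Contains(src) {
				return "goto_table:SessionAffinity" // flows 1-2
			}
		}
		return "drop" // flow 3
	}
	return "goto_table:SessionAffinity" // flow 4, table-miss
}

func main() {
	ranges := []netip.Prefix{
		netip.MustParsePrefix("192.168.77.0/24"),
		netip.MustParsePrefix("192.168.78.0/24"),
	}
	fmt.Println(decide(netip.MustParseAddr("192.168.78.10"), netip.MustParseAddr("192.168.77.152"), 80, ranges)) // allowed
	fmt.Println(decide(netip.MustParseAddr("10.0.0.5"), netip.MustParseAddr("192.168.77.152"), 80, ranges))      // dropped
}
```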

### SessionAffinity

This table is designed to implement Service session affinity. The learned flows that cache the information of the
@@ -978,7 +1004,7 @@ This table is used to implement Service Endpoint selection. It addresses specifi
3. LoadBalancer, as demonstrated in the example [LoadBalancer].
4. Service configured with external IPs, as demonstrated in the example [Service with ExternalIP].
5. Service configured with session affinity, as demonstrated in the example [Service with session affinity].
6. Service configured with externalTrafficPolicy to `Local`, as demonstrated in the example [Service with
6. Service configured with `externalTrafficPolicy` to `Local`, as demonstrated in the example [Service with
ExternalTrafficPolicy Local].

If you dump the flows of this table, you may see the following:
@@ -1081,7 +1107,7 @@ If you dump the flows of this table, you may see the following:
```

Flow 1 is designed for Services without Endpoints. It identifies the first packet of connections destined for such Service
by matching `SvcNoEpRegMark`. Subsequently, the packet is forwarded to the OpenFlow controller (Antrea Agent). For TCP
by matching `SvcRejectRegMark`. Subsequently, the packet is forwarded to the OpenFlow controller (Antrea Agent). For TCP
Service traffic, the controller will send a TCP RST, and for all other cases the controller will send an ICMP Destination
Unreachable message.

@@ -1312,7 +1338,7 @@ the following cases when Antrea Proxy is not enabled:
to complete the DNAT processes, e.g., kube-proxy. The destination MAC of the packets is rewritten in the table to
avoid them being forwarded to the original client Pod by mistake.
- When hairpin is involved, i.e. connections between 2 local Pods, for which NAT is performed. One example is a
Pod accessing a NodePort Service for which externalTrafficPolicy is set to `Local` using the local Node's IP address,
Pod accessing a NodePort Service for which `externalTrafficPolicy` is set to `Local` using the local Node's IP address,
as there will be no SNAT for such traffic. Another example could be hostPort support, depending on how the feature
is implemented.

4 changes: 4 additions & 0 deletions pkg/agent/openflow/client.go
@@ -799,6 +799,9 @@ func (c *client) InstallServiceFlows(config *types.ServiceConfig) error {
if config.IsDSR {
flows = append(flows, c.featureService.dsrServiceMarkFlow(config))
}
if c.proxyLoadBalancerIPs && len(config.LoadBalancerSourceRanges) != 0 {
flows = append(flows, c.featureService.loadBalancerSourceRangesValidateFlows(config)...)
}
cacheKey := generateServicePortFlowCacheKey(config.ServiceIP, config.ServicePort, config.Protocol)
return c.addFlows(c.featureService.cachedFlows, cacheKey, flows)
}
@@ -940,6 +943,7 @@ func (c *client) generatePipelines() {
c.enableProxy,
c.proxyAll,
c.enableDSR,
c.proxyLoadBalancerIPs,
c.connectUplinkToBridge)
c.activatedFeatures = append(c.activatedFeatures, c.featureService)
c.traceableFeatures = append(c.traceableFeatures, c.featureService)
107 changes: 72 additions & 35 deletions pkg/agent/openflow/client_test.go
@@ -83,6 +83,7 @@ func skipTest(tb testing.TB, skipLinux, skipWindows bool) {
type clientOptions struct {
enableOVSMeters bool
enableProxy bool
proxyLoadBalancerIPs bool
enableAntreaPolicy bool
enableEgress bool
enableEgressTrafficShaping bool
@@ -393,9 +394,10 @@ func newFakeClientWithBridge(
) *client {
// default options
o := &clientOptions{
enableProxy: true,
enableAntreaPolicy: true,
enableEgress: true,
enableProxy: true,
enableAntreaPolicy: true,
enableEgress: true,
proxyLoadBalancerIPs: true,
}
for _, fn := range options {
fn(o)
@@ -412,6 +414,7 @@ func newFakeClientWithBridge(
false,
o.proxyAll,
o.enableDSR,
o.proxyLoadBalancerIPs,
o.connectUplinkToBridge,
o.enableMulticast,
o.enableTrafficControl,
@@ -1017,8 +1020,8 @@ func Test_client_GetPodFlowKeys(t *testing.T) {
"table=1,priority=200,arp,in_port=11,arp_spa=10.10.0.11,arp_sha=00:00:10:10:00:11",
"table=3,priority=190,in_port=11",
"table=4,priority=200,ip,in_port=11,dl_src=00:00:10:10:00:11,nw_src=10.10.0.11",
"table=17,priority=200,ip,reg0=0x200/0x200,nw_dst=10.10.0.11",
"table=22,priority=200,dl_dst=00:00:10:10:00:11",
"table=18,priority=200,ip,reg0=0x200/0x200,nw_dst=10.10.0.11",
"table=23,priority=200,dl_dst=00:00:10:10:00:11",
}
assert.ElementsMatch(t, expectedFlowKeys, flowKeys)
}
@@ -1254,17 +1257,18 @@ func Test_client_InstallServiceFlows(t *testing.T) {
port := uint16(80)

testCases := []struct {
name string
trafficPolicyLocal bool
protocol binding.Protocol
svcIP net.IP
affinityTimeout uint16
isExternal bool
isNodePort bool
isNested bool
isDSR bool
enableMulticluster bool
expectedFlows []string
name string
trafficPolicyLocal bool
protocol binding.Protocol
svcIP net.IP
affinityTimeout uint16
isExternal bool
isNodePort bool
isNested bool
isDSR bool
enableMulticluster bool
loadBalancerSourceRanges []string
expectedFlows []string
}{
{
name: "Service ClusterIP",
@@ -1449,6 +1453,38 @@
"cookie=0x1030000000064, table=DSRServiceMark, priority=200,tcp6,reg4=0xc000000/0xe000000,ipv6_dst=fec0:10:96::100,tp_dst=80 actions=learn(table=SessionAffinity,idle_timeout=160,fin_idle_timeout=5,priority=210,delete_learned,cookie=0x1030000000064,eth_type=0x86dd,nw_proto=0x6,OXM_OF_TCP_SRC[],OXM_OF_TCP_DST[],NXM_NX_IPV6_SRC[],NXM_NX_IPV6_DST[],load:NXM_NX_REG4[0..15]->NXM_NX_REG4[0..15],load:0x2->NXM_NX_REG4[16..18],load:0x1->NXM_NX_REG4[25],load:NXM_NX_XXREG3[]->NXM_NX_XXREG3[]),set_field:0x2000000/0x2000000->reg4,goto_table:EndpointDNAT",
},
},
{
name: "Service LoadBalancer,LoadBalancerSourceRanges,SessionAffinity,Short-circuiting",
protocol: binding.ProtocolSCTP,
svcIP: svcIPv4,
affinityTimeout: uint16(100),
isExternal: true,
trafficPolicyLocal: true,
loadBalancerSourceRanges: []string{"192.168.1.0/24", "192.168.2.0/24"},
expectedFlows: []string{
"cookie=0x1030000000000, table=LoadBalancerSourceRanges, priority=200,sctp,nw_src=192.168.1.0/24,nw_dst=10.96.0.100,tp_dst=80 actions=goto_table:SessionAffinity",
"cookie=0x1030000000000, table=LoadBalancerSourceRanges, priority=200,sctp,nw_src=192.168.2.0/24,nw_dst=10.96.0.100,tp_dst=80 actions=goto_table:SessionAffinity",
"cookie=0x1030000000000, table=LoadBalancerSourceRanges, priority=190,sctp,nw_dst=10.96.0.100,tp_dst=80 actions=drop",
"cookie=0x1030000000000, table=ServiceLB, priority=210,sctp,reg4=0x10010000/0x10070000,nw_dst=10.96.0.100,tp_dst=80 actions=set_field:0x200/0x200->reg0,set_field:0x30000/0x70000->reg4,set_field:0x200000/0x200000->reg4,set_field:0x64->reg7,group:100",
"cookie=0x1030000000000, table=ServiceLB, priority=200,sctp,reg4=0x10000/0x70000,nw_dst=10.96.0.100,tp_dst=80 actions=set_field:0x200/0x200->reg0,set_field:0x30000/0x70000->reg4,set_field:0x200000/0x200000->reg4,set_field:0x65->reg7,group:101",
"cookie=0x1030000000065, table=ServiceLB, priority=190,sctp,reg4=0x30000/0x70000,nw_dst=10.96.0.100,tp_dst=80 actions=learn(table=SessionAffinity,hard_timeout=100,priority=200,delete_learned,cookie=0x1030000000065,eth_type=0x800,nw_proto=0x84,OXM_OF_SCTP_DST[],NXM_OF_IP_DST[],NXM_OF_IP_SRC[],load:NXM_NX_REG4[0..15]->NXM_NX_REG4[0..15],load:NXM_NX_REG4[26]->NXM_NX_REG4[26],load:NXM_NX_REG3[]->NXM_NX_REG3[],load:0x2->NXM_NX_REG4[16..18],load:0x1->NXM_NX_REG0[9],load:0x1->NXM_NX_REG4[21]),set_field:0x20000/0x70000->reg4,goto_table:EndpointDNAT",
},
},
{
name: "Service LoadBalancer,LoadBalancerSourceRanges,IPv6,SessionAffinity",
protocol: binding.ProtocolSCTPv6,
svcIP: svcIPv6,
affinityTimeout: uint16(100),
isExternal: true,
loadBalancerSourceRanges: []string{"fec0:192:168:1::/64", "fec0:192:168:2::/64"},
expectedFlows: []string{
"cookie=0x1030000000000, table=LoadBalancerSourceRanges, priority=200,sctp6,ipv6_src=fec0:192:168:1::/64,ipv6_dst=fec0:10:96::100,tp_dst=80 actions=goto_table:SessionAffinity",
"cookie=0x1030000000000, table=LoadBalancerSourceRanges, priority=200,sctp6,ipv6_src=fec0:192:168:2::/64,ipv6_dst=fec0:10:96::100,tp_dst=80 actions=goto_table:SessionAffinity",
"cookie=0x1030000000000, table=LoadBalancerSourceRanges, priority=190,sctp6,ipv6_dst=fec0:10:96::100,tp_dst=80 actions=drop",
"cookie=0x1030000000000, table=ServiceLB, priority=200,sctp6,reg4=0x10000/0x70000,ipv6_dst=fec0:10:96::100,tp_dst=80 actions=set_field:0x200/0x200->reg0,set_field:0x30000/0x70000->reg4,set_field:0x200000/0x200000->reg4,set_field:0x64->reg7,group:100",
"cookie=0x1030000000064, table=ServiceLB, priority=190,sctp6,reg4=0x30000/0x70000,ipv6_dst=fec0:10:96::100,tp_dst=80 actions=learn(table=SessionAffinity,hard_timeout=100,priority=200,delete_learned,cookie=0x1030000000064,eth_type=0x86dd,nw_proto=0x84,OXM_OF_SCTP_DST[],NXM_NX_IPV6_DST[],NXM_NX_IPV6_SRC[],load:NXM_NX_REG4[0..15]->NXM_NX_REG4[0..15],load:NXM_NX_REG4[26]->NXM_NX_REG4[26],load:NXM_NX_XXREG3[]->NXM_NX_XXREG3[],load:0x2->NXM_NX_REG4[16..18],load:0x1->NXM_NX_REG0[9],load:0x1->NXM_NX_REG4[21]),set_field:0x20000/0x70000->reg4,goto_table:EndpointDNAT",
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
@@ -1471,17 +1507,18 @@
cacheKey := generateServicePortFlowCacheKey(tc.svcIP, port, tc.protocol)

assert.NoError(t, fc.InstallServiceFlows(&types.ServiceConfig{
ServiceIP: tc.svcIP,
ServicePort: port,
Protocol: tc.protocol,
TrafficPolicyLocal: tc.trafficPolicyLocal,
LocalGroupID: localGroupID,
ClusterGroupID: clusterGroupID,
AffinityTimeout: tc.affinityTimeout,
IsExternal: tc.isExternal,
IsNodePort: tc.isNodePort,
IsNested: tc.isNested,
IsDSR: tc.isDSR,
ServiceIP: tc.svcIP,
ServicePort: port,
Protocol: tc.protocol,
TrafficPolicyLocal: tc.trafficPolicyLocal,
LocalGroupID: localGroupID,
ClusterGroupID: clusterGroupID,
AffinityTimeout: tc.affinityTimeout,
IsExternal: tc.isExternal,
IsNodePort: tc.isNodePort,
IsNested: tc.isNested,
IsDSR: tc.isDSR,
LoadBalancerSourceRanges: tc.loadBalancerSourceRanges,
}))
fCacheI, ok := fc.featureService.cachedFlows.Load(cacheKey)
require.True(t, ok)
@@ -1527,11 +1564,11 @@ func Test_client_GetServiceFlowKeys(t *testing.T) {
assert.NoError(t, fc.InstallEndpointFlows(bindingProtocol, endpoints))
flowKeys := fc.GetServiceFlowKeys(svcIP, svcPort, bindingProtocol, endpoints)
expectedFlowKeys := []string{
"table=11,priority=200,tcp,reg4=0x10000/0x70000,nw_dst=10.96.0.224,tp_dst=80",
"table=11,priority=190,tcp,reg4=0x30000/0x70000,nw_dst=10.96.0.224,tp_dst=80",
"table=12,priority=200,tcp,reg3=0xa0a000b,reg4=0x20050/0x7ffff",
"table=12,priority=200,tcp,reg3=0xa0a000c,reg4=0x20050/0x7ffff",
"table=20,priority=190,ct_state=+new+trk,ip,nw_src=10.10.0.12,nw_dst=10.10.0.12",
"table=12,priority=200,tcp,reg4=0x10000/0x70000,nw_dst=10.96.0.224,tp_dst=80",
"table=12,priority=190,tcp,reg4=0x30000/0x70000,nw_dst=10.96.0.224,tp_dst=80",
"table=13,priority=200,tcp,reg3=0xa0a000b,reg4=0x20050/0x7ffff",
"table=13,priority=200,tcp,reg3=0xa0a000c,reg4=0x20050/0x7ffff",
"table=21,priority=190,ct_state=+new+trk,ip,nw_src=10.10.0.12,nw_dst=10.10.0.12",
}
assert.ElementsMatch(t, expectedFlowKeys, flowKeys)
}
@@ -2031,7 +2068,7 @@ func Test_client_setBasePacketOutBuilder(t *testing.T) {
}

func prepareSetBasePacketOutBuilder(ctrl *gomock.Controller, success bool) *client {
ofClient := NewClient(bridgeName, bridgeMgmtAddr, nodeiptest.NewFakeNodeIPChecker(), true, true, false, false, false, false, false, false, false, false, false, false, false, nil, false, defaultPacketInRate)
ofClient := NewClient(bridgeName, bridgeMgmtAddr, nodeiptest.NewFakeNodeIPChecker(), true, true, false, false, false, false, false, false, true, false, false, false, false, false, nil, false, defaultPacketInRate)
m := ovsoftest.NewMockBridge(ctrl)
ofClient.bridge = m
bridge := binding.OFBridge{}
@@ -2787,8 +2824,8 @@ func Test_client_ReplayFlows(t *testing.T) {
"cookie=0x1020000000000, table=IngressMetric, priority=200,reg0=0x400/0x400,reg3=0xf actions=drop",
)
replayedFlows = append(replayedFlows,
"cookie=0x1020000000000, table=IngressRule, priority=200,conj_id=15 actions=set_field:0xf->reg3,set_field:0x400/0x400->reg0,set_field:0x800/0x1800->reg0,set_field:0x2000000/0xfe000000->reg0,set_field:0x1b/0xff->reg2,group:4",
"cookie=0x1020000000000, table=IngressDefaultRule, priority=200,reg1=0x64 actions=set_field:0x800/0x1800->reg0,set_field:0x2000000/0xfe000000->reg0,set_field:0x400000/0x600000->reg0,set_field:0x1c/0xff->reg2,goto_table:Output",
"cookie=0x1020000000000, table=IngressRule, priority=200,conj_id=15 actions=set_field:0xf->reg3,set_field:0x400/0x400->reg0,set_field:0x800/0x1800->reg0,set_field:0x2000000/0xfe000000->reg0,set_field:0x1c/0xff->reg2,group:4",
"cookie=0x1020000000000, table=IngressDefaultRule, priority=200,reg1=0x64 actions=set_field:0x800/0x1800->reg0,set_field:0x2000000/0xfe000000->reg0,set_field:0x400000/0x600000->reg0,set_field:0x1d/0xff->reg2,goto_table:Output",
)

// Feature Pod connectivity replays flows.
3 changes: 3 additions & 0 deletions pkg/agent/openflow/framework.go
@@ -269,6 +269,9 @@ func (f *featureService) getRequiredTables() []*Table {
if f.enableDSR {
tables = append(tables, DSRServiceMarkTable)
}
if f.proxyLoadBalancerIPs {
tables = append(tables, LoadBalancerSourceRangesTable)
}
return tables
}

14 changes: 11 additions & 3 deletions pkg/agent/openflow/framework_test.go
@@ -36,6 +36,7 @@ var (
defaultOptions = clientOptions{
enableProxy: true,
enableAntreaPolicy: true,
proxyLoadBalancerIPs: true,
proxyAll: false,
connectUplinkToBridge: false,
enableMulticast: false,
@@ -78,9 +79,10 @@ func newTestFeatureService(options ...clientOptionsFn) *featureService {
fn(&o)
}
return &featureService{
enableAntreaPolicy: o.enableAntreaPolicy,
enableProxy: o.enableProxy,
proxyAll: o.proxyAll,
enableAntreaPolicy: o.enableAntreaPolicy,
enableProxy: o.enableProxy,
proxyAll: o.proxyAll,
proxyLoadBalancerIPs: o.proxyLoadBalancerIPs,
}
}

@@ -129,6 +131,7 @@ func TestBuildPipeline(t *testing.T) {
ConntrackTable,
ConntrackStateTable,
PreRoutingClassifierTable,
LoadBalancerSourceRangesTable,
SessionAffinityTable,
ServiceLBTable,
EndpointDNATTable,
@@ -260,6 +263,7 @@ func TestBuildPipeline(t *testing.T) {
ConntrackTable,
ConntrackStateTable,
PreRoutingClassifierTable,
LoadBalancerSourceRangesTable,
SessionAffinityTable,
ServiceLBTable,
EndpointDNATTable,
@@ -304,6 +308,7 @@ func TestBuildPipeline(t *testing.T) {
ConntrackTable,
ConntrackStateTable,
PreRoutingClassifierTable,
LoadBalancerSourceRangesTable,
SessionAffinityTable,
ServiceLBTable,
EndpointDNATTable,
@@ -347,6 +352,7 @@ func TestBuildPipeline(t *testing.T) {
ConntrackTable,
ConntrackStateTable,
PreRoutingClassifierTable,
LoadBalancerSourceRangesTable,
SessionAffinityTable,
ServiceLBTable,
EndpointDNATTable,
@@ -427,6 +433,7 @@
ConntrackStateTable,
PreRoutingClassifierTable,
NodePortMarkTable,
LoadBalancerSourceRangesTable,
SessionAffinityTable,
ServiceLBTable,
EndpointDNATTable,
@@ -474,6 +481,7 @@ func TestBuildPipeline(t *testing.T) {
ConntrackTable,
ConntrackStateTable,
PreRoutingClassifierTable,
LoadBalancerSourceRangesTable,
SessionAffinityTable,
ServiceLBTable,
EndpointDNATTable,