Forum Discussion
Derek_Murphy_38
Jun 03, 2011 · Nimbostratus
agreed re: diagram - working on that still.
My questions also stem from taking the admin class about 4 months ago and trying to remember everything we covered now, so certain things, like in-band management, I've simply forgotten how they work. Let me try to clarify.
My original idea was this (which changes now that in-band management rides in the channels):
arx1: gbe1/1 - gbe1/4 - cabled into core switch 1
arx1: gbe1/5 - gbe1/8 - cabled into core switch 2
arx1: gbe1/9 - gbe1/10 - heartbeat
arx1: gbe1/11 - gbe1/12 - in-band management
arx2: gbe1/1 - gbe1/4 - cabled into core switch 1
arx2: gbe1/5 - gbe1/8 - cabled into core switch 2
arx2: gbe1/9 - gbe1/10 - heartbeat
arx2: gbe1/11 - gbe1/12 - in-band management
Now it seems like it would look more like this:
arx1: gbe1/1 - gbe1/5 - cabled into core switch 1
arx1: gbe1/6 - gbe1/10 - cabled into core switch 2
arx1: gbe1/11 - gbe1/12 - heartbeat
arx2: gbe1/1 - gbe1/5 - cabled into core switch 1
arx2: gbe1/6 - gbe1/10 - cabled into core switch 2
arx2: gbe1/11 - gbe1/12 - heartbeat
with ports 1-10 on each switch tagged with VLANs 32 and 114.
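For what it's worth, on the switch side each ARX-facing port would be a trunk carrying both VLANs. A hypothetical sketch in Cisco IOS syntax (an assumption; your core switches may be a different vendor, and the interface name is made up):

```
! Hypothetical trunk config for one ARX-facing port on core switch 1.
! VLAN numbers 32 and 114 come from the plan above; the rest is assumed.
interface GigabitEthernet1/0/1
 description arx1 gbe1/1
 switchport mode trunk
 switchport trunk allowed vlan 32,114
```

The same allowed-VLAN list would repeat on all ten ARX-facing ports per switch.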
From a client data-access perspective, we'll have the following:
clients = many vlans -> files.domain.com vlan 32 -> netapps vlan 114
Clients are all over the world, in different offices, on many different VLANs. They will access all files via files.domain.com (10.10.32.50, on VLAN 32, for example).
The arx will be serving data from the following:
netapp1 - vlan 114 - 10.10.114.20
netapp2 - vlan 114 - 10.10.114.21
Clients should not be allowed to go to the netapps directly for access. We will configure the shares to allow only ARX access, but from a routing standpoint, the architects in the group want to make the storage network non-routable someday, allowing only machines with an IP on the 10.10.114 network to send to and receive from the netapps.
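The intended end-state policy above boils down to a simple membership test: only hosts with a source address on the storage subnet can talk to the netapps. A small sketch of that rule (the 10.10.114.30 proxy address is a made-up example; only the subnets come from the plan above):

```python
# Sketch of the planned reachability policy once the storage network
# is made non-routable: only hosts addressed inside 10.10.114.0/24
# (e.g. the ARX's proxy IPs) can reach the netapps.
import ipaddress

STORAGE_NET = ipaddress.ip_network("10.10.114.0/24")

def can_reach_netapp(source_ip: str) -> bool:
    """True only for hosts with an address on the storage VLAN."""
    return ipaddress.ip_address(source_ip) in STORAGE_NET

# Hypothetical ARX proxy IP on VLAN 114 -> allowed
print(can_reach_netapp("10.10.114.30"))  # True
# Client on the VIP network (VLAN 32) -> blocked
print(can_reach_netapp("10.10.32.99"))   # False
```

Clients on VLAN 32 then only ever see the VIP at 10.10.32.50, never the filers themselves.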
My hope is for the ARX to access content on the netapps via the 10.10.114 VLAN while serving content to the clients on the 10.10.32 VLAN (all over the same channel of ports on the ARX). From the sound of it, this might not be possible, unless it can work via static routes to the ARX's dependencies: domain controllers (for CIFS), LDAP servers (for NFS; no NIS, only LDAP for UNIX here), and NTP servers?
Forgive me if I'm overlooking anything. My previous experience was with a lab environment on an ARX 500, so it was a much simpler setup :)
My gut feeling is that the in-band management and the proxy IPs are pretty closely related, so if anything it would be proxy IP / in-band management on VLAN 114 and the VIP on VLAN 32 (but all ports would need to carry both VLANs)?
Regarding any static routes: VLAN 32 is our server network, so any machine that might be a dependency of the ARX (auth, etc.) would be in that network. We also have a management network (VLAN 44) that we would want the ARX's 114 interfaces to be able to route to, in case we shut down services on the 32 network (domain controllers for maintenance, let's say).
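If that VLAN 44 reachability does come down to a static route out of the 114 side, conceptually it is just one entry; in generic router-style syntax (not verified ARX CLI, and the 10.10.114.1 next-hop gateway is an assumption):

```
! Hypothetical static route: reach the management network (VLAN 44)
! via an assumed gateway 10.10.114.1 on the storage VLAN.
ip route 10.10.44.0 255.255.255.0 10.10.114.1
```

Similar per-destination routes would be the fallback for any other dependencies (DCs, LDAP, NTP) that end up off the 114 network.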