Issue setting up oauth2-proxy with azure b2c tenant #299

Open
shetsu01 opened this issue Apr 7, 2025 · 5 comments

Comments

@shetsu01

shetsu01 commented Apr 7, 2025

We have been struggling to set up the oauth2-proxy Helm chart on our Azure AKS cluster with Azure B2C integration. We get the error below with every combination we have tried.

ERROR

[2025/04/07 12:52:30] [provider.go:55] Performing OIDC Discovery...
[2025/04/07 12:52:30] [main.go:59] ERROR: Failed to initialise OAuth2 Proxy: error initialising provider: could not create provider data: error building OIDC ProviderVerifier: could not get verifier builder: error while discovery OIDC configuration: failed to discover OIDC configuration: unexpected status "404": The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.
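
(As far as we understand, oauth2-proxy appends /.well-known/openid-configuration to the configured oidc-issuer-url during discovery, so the request returning the 404 would be roughly https://icscentralalloy.b2clogin.com/19452690-09a8-47f3-89dc-335e5fe3ad21/v2.0/.well-known/openid-configuration.)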

Do you have a sample values file for this setup that you can share, or could you help us figure out what could be wrong here?

Below is one of the values.yaml files we used:

global: {}
# To help compatibility with other charts which use global.imagePullSecrets.
# global:
#   imagePullSecrets:
#   - name: pullSecret1
#   - name: pullSecret2

## Override the deployment namespace
##
namespaceOverride: ""

# Force the target Kubernetes version (it uses Helm `.Capabilities` if not set).
# This is especially useful for `helm template` as capabilities are always empty
# due to the fact that it doesn't query an actual cluster
kubeVersion:

# Oauth client configuration specifics
config:
  # Add config annotations
  annotations: {}
  # OAuth client ID
  clientID: "xxxxxxxx"
  # OAuth client secret
  clientSecret: "xxxxxxxx"
  # Create a new secret with the following command
  # openssl rand -base64 32 | head -c 32 | base64
  # Use an existing secret for OAuth2 credentials (see secret.yaml for required fields)
  # Example:
  # existingSecret: secret
  cookieSecret: "xxxxxxx"
  # The name of the cookie that oauth2-proxy will create
  # If left empty, it will default to the release name
  #  oidc_issuer_url: "https://icscentralalloy.b2clogin.com/icscentralalloy.onmicrosoft.com/B2C_1_alloyflow/v2.0/"
  # provider: "azure"
  pass_user_headers: true
  set_xauthrequest: true
  skip_provider_button: true
  cookieName: ""
  # google: {}
    # adminEmail: xxxx
    # useApplicationDefaultCredentials: true
    # targetPrincipal: xxxx
    # serviceAccountJson: xxxx
    # Alternatively, use an existing secret (see google-secret.yaml for required fields)
    # Example:
    # existingSecret: google-secret
    # groups: []
    # Example:
    #  - group1@example.com
    #  - group2@example.com
  # Default configuration, to be overridden
  configFile: |-
    # provider = azure
    # oidc-issuer-url = "https://icscentralalloy.b2clogin.com/icscentralalloy.onmicrosoft.com/B2C_1_alloyflow/v2.0/"
    # client-id = "xxxxxxx"
    # client-secret = "xxxxxxxx"
    # cookie-secret = "xxxxxxxx"
    # redirect-url = "https://grpc-alloy.blue.us-instelemetry-dev.azure.lnrsg.io/oauth2/callback"
    # login-url = "https://icscentralalloy.b2clogin.com/icscentralalloy.onmicrosoft.com/B2C_1_alloyflow/oauth2/v2.0/authorize"
    # redeem-url = "https://icscentralalloy.b2clogin.com/icscentralalloy.onmicrosoft.com/B2C_1_alloyflow/oauth2/v2.0/token"
    # email_domains = ["*"]
    # upstreams = ["file:///dev/null"]
    # skip_provider_button = true

alphaConfig:
  enabled: false
  # Add config annotations
  annotations: {}
  # Arbitrary configuration data to append to the server section
  serverConfigData: {}
  # Arbitrary configuration data to append to the metrics section
  metricsConfigData: {}
  # Arbitrary configuration data to append
  configData: {}
  # Arbitrary configuration to append
  # This is treated as a Go template and rendered with the root context
  configFile: ""
  # Use an existing config map (see secret-alpha.yaml for required fields)
  existingConfig: ~
  # Use an existing secret
  existingSecret: ~

image:
  repository: "quay.io/oauth2-proxy/oauth2-proxy"
  # appVersion is used by default
  tag: ""
  pullPolicy: "IfNotPresent"
  command: []

# Optionally specify an array of imagePullSecrets.
# Secrets must be manually created in the namespace.
# ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
imagePullSecrets: []
  # - name: myRegistryKeySecretName

# Set a custom containerPort if required.
# This will default to 4180 if this value is not set and the httpScheme set to http
# This will default to 4443 if this value is not set and the httpScheme set to https
# containerPort: 4180

extraArgs: 
  provider: "oidc"
  azure-tenant: "xxxxxxxx"       # Azure AD B2C tenant ID
  client-id: "xxxxxxx"
  client-secret: "xxxxxxxx"
  scope: "openid profile email"
  oidc-issuer-url: "https://icscentralalloy.b2clogin.com/19452690-09a8-47f3-89dc-335e5fe3ad21/v2.0/"
  # login-url: "https://icscentralalloy.b2clogin.com/icscentralalloy.onmicrosoft.com/b2c_1_alloyflow/oauth2/v2.0/authorize"
  # redeem-url: "https://icscentralalloy.b2clogin.com/icscentralalloy.onmicrosoft.com/b2c_1_alloyflow/oauth2/v2.0/token"
  # redirect-url: "https://grpc-alloy.blue.us-instelemetry-dev.azure.lnrsg.io/oauth2/callback"
  email-domain: "*"                      # Allow all email domains (adjust as needed)
  upstream: "file:///dev/null"           # Placeholder; replace with your app URL if needed
  http-address: "0.0.0.0:4180"
extraEnv: []

envFrom: []
# Load environment variables from a ConfigMap(s) and/or Secret(s)
# that already exists (created and managed by you).
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
#
# PS: Changes in these ConfigMaps or Secrets will not be automatically
#     detected and you must manually restart the relevant Pods after changes.
#
#  - configMapRef:
#      name: special-config
#  - secretRef:
#      name: special-config-secret

# -- Custom labels to add into metadata
customLabels: {}

# To authorize individual email addresses
# This is part of extraArgs, but since it needs special treatment it gets its own section
authenticatedEmailsFile:
  enabled: false
  # Defines how the email addresses file will be projected, via a configmap or secret
  persistence: configmap
  # template is the name of a configmap that contains the email user list but is managed outside this chart.
  # It's a simpler way to maintain only one configmap (user list) instead of changing it for each oauth2-proxy service.
  # Be aware that the key in the external configmap's data needs to be named "restricted_user_access" or match the
  # value provided in the restrictedUserAccessKey field.
  template: ""
  # The configmap/secret key under which the list of email access is stored
  # Defaults to "restricted_user_access" if not filled-in, but can be overridden to allow flexibility
  restrictedUserAccessKey: ""
  # One email per line
  # example:
  # restricted_access: |-
  #   name1@domain
  #   name2@domain
  # If you override the config with restricted_access, it will configure a user list within this chart, which takes care of the
  # configmap resource.
  restricted_access: ""
  annotations: {}
  # helm.sh/resource-policy: keep

service:
  type: ClusterIP
  # when service.type is ClusterIP ...
  # clusterIP: 192.0.2.20
  # when service.type is LoadBalancer ...
  # loadBalancerIP: 198.51.100.40
  # loadBalancerSourceRanges: 203.0.113.0/24
  # when service.type is NodePort ...
  # nodePort: 80
  portNumber: 4180
  # Protocol set on the service
  appProtocol: http
  annotations: {}
  # foo.io/bar: "true"
  # configure externalTrafficPolicy
  externalTrafficPolicy: ""
  # configure internalTrafficPolicy
  internalTrafficPolicy: ""
  # configure service target port
  targetPort: ""

## Create or use ServiceAccount
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  enabled: true
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the fullname template
  name:
  automountServiceAccountToken: true
  annotations: {}

ingress:
  enabled: false
  # className: nginx
  path: /
  # Only used if API capabilities (networking.k8s.io/v1) allow it
  pathType: ImplementationSpecific
  # Used to create an Ingress record.
  # hosts:
    # - chart-example.local
  # Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
  # Warning! The configuration is dependent on your current k8s API version capabilities (networking.k8s.io/v1)
  # extraPaths:
  # - path: /*
  #   pathType: ImplementationSpecific
  #   backend:
  #     service:
  #       name: ssl-redirect
  #       port:
  #         name: use-annotation
  labels: {}
  # annotations:
  #   kubernetes.io/ingress.class: nginx
  #   kubernetes.io/tls-acme: "true"
  # tls:
    # Secrets must be manually created in the namespace.
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local

resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 300Mi
  # requests:
  #   cpu: 100m
  #   memory: 300Mi

extraVolumes: []
  # - name: ca-bundle-cert
  #   secret:
  #     secretName: <secret-name>

extraVolumeMounts: []
  # - mountPath: /etc/ssl/certs/
  #   name: ca-bundle-cert

# Additional containers to be added to the pod.
extraContainers: []
  #  - name: my-sidecar
  #    image: nginx:latest

# Additional Init containers to be added to the pod.
extraInitContainers: []
  #  - name: wait-for-idp
  #    image: my-idp-wait:latest
  #    command:
  #    - sh
  #    - -c
  #    - wait-for-idp.sh

priorityClassName: ""

# hostAliases is a list of aliases to be added to /etc/hosts for network name resolution
hostAliases: []
# - ip: "10.xxx.xxx.xxx"
#   hostnames:
#     - "auth.example.com"
# - ip: 127.0.0.1
#   hostnames:
#     - chart-example.local
#     - example.local

# [TopologySpreadConstraints](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) configuration.
# Ref: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling
# topologySpreadConstraints: []

# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}

# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}

# Whether to use secrets instead of environment values for setting up OAUTH2_PROXY variables
proxyVarsAsSecrets: true

# Configure Kubernetes liveness and readiness probes.
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
# Disable both when deploying with Istio 1.0 mTLS. https://istio.io/help/faq/security/#k8s-health-checks
livenessProbe:
  enabled: true
  initialDelaySeconds: 0
  timeoutSeconds: 1

readinessProbe:
  enabled: true
  initialDelaySeconds: 0
  timeoutSeconds: 5
  periodSeconds: 10
  successThreshold: 1

# Configure Kubernetes security context for container
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
  enabled: true
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - ALL
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 2000
  runAsGroup: 2000
  seccompProfile:
    type: RuntimeDefault

deploymentAnnotations: {}
podAnnotations: {}
podLabels: {}
replicaCount: 1
revisionHistoryLimit: 10
strategy: {}
enableServiceLinks: true

## PodDisruptionBudget settings
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
podDisruptionBudget:
  enabled: true
  minAvailable: 1

## Horizontal Pod Autoscaling
## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
#  targetMemoryUtilizationPercentage: 80
  annotations: {}

# Configure Kubernetes security context for pod
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
podSecurityContext: {}

# whether to use http or https
httpScheme: http

initContainers:
  # if the redis sub-chart is enabled, wait for it to be ready
  # before starting the proxy
  # creates a role binding to get, list, watch, the redis master pod
  # if service account is enabled
  waitForRedis:
    enabled: true
    image:
      repository: "alpine"
      tag: "latest"
      pullPolicy: "IfNotPresent"
    # uses the kubernetes version of the cluster
    # the chart is deployed on, if not set
    kubectlVersion: ""
    securityContext:
      enabled: true
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 65534
      runAsGroup: 65534
      seccompProfile:
        type: RuntimeDefault
    timeout: 180
    resources: {}
      # limits:
      #   cpu: 100m
      #   memory: 300Mi
      # requests:
      #   cpu: 100m
      #   memory: 300Mi

# Additionally authenticate against a htpasswd file. Entries must be created with "htpasswd -B" for bcrypt encryption.
# Alternatively supply an existing secret which contains the required information.
htpasswdFile:
  enabled: false
  existingSecret: ""
  entries: []
  # One row for each user
  # example:
  # entries:
  #  - testuser:$2y$05$gY6dgXqjuzFhwdhsiFe7seM9q9Tile4Y3E.CBpAZJffkeiLaC21Gy

# Configure the session storage type, between cookie and redis
sessionStorage:
  # Can be one of the supported session storage cookie|redis
  type: cookie
  redis:
    # Name of the Kubernetes secret containing the redis & redis sentinel password values (see also `sessionStorage.redis.passwordKey`)
    existingSecret: ""
    # Redis password value. Applicable for all Redis configurations. Taken from redis subchart secret if not set. `sessionStorage.redis.existingSecret` takes precedence
    password: ""
    # Key of the Kubernetes secret data containing the redis password value. If you use the redis sub chart, make sure
    # this password matches the one used in redis.global.redis.password (see below).
    passwordKey: "redis-password"
    # Can be one of standalone|cluster|sentinel
    clientType: "standalone"
    standalone:
      # URL of redis standalone server for redis session storage (e.g. `redis://HOST[:PORT]`). Automatically generated if not set
      connectionUrl: ""
    cluster:
      # List of Redis cluster connection URLs. Array or single string allowed.
      connectionUrls: []
      # - "redis://127.0.0.1:8000"
      # - "redis://127.0.0.1:8001"
    sentinel:
      # Name of the Kubernetes secret containing the redis sentinel password value (see also `sessionStorage.redis.sentinel.passwordKey`). Default: `sessionStorage.redis.existingSecret`
      existingSecret: ""
      # Redis sentinel password. Used only for sentinel connection; any redis node passwords need to use `sessionStorage.redis.password`
      password: ""
      # Key of the Kubernetes secret data containing the redis sentinel password value
      passwordKey: "redis-sentinel-password"
      # Redis sentinel master name
      masterName: ""
      # List of Redis cluster connection URLs. Array or single string allowed.
      connectionUrls: []
      # - "redis://127.0.0.1:8000"
      # - "redis://127.0.0.1:8001"

# Enables and configure the automatic deployment of the redis subchart
redis:
  # provision an instance of the redis sub-chart
  enabled: false
  # Redis specific helm chart settings, please see:
  # https://github.com/bitnami/charts/tree/master/bitnami/redis#parameters
  # global:
  #   redis:
  #     password: yourpassword
  # If you install Redis using this sub chart, make sure that the password of the sub chart matches the password
  # you set in sessionStorage.redis.password (see above).
  # redisPort: 6379
  # architecture: standalone

# Enables apiVersion deprecation checks
checkDeprecation: true

# Allows graceful shutdown
# terminationGracePeriodSeconds: 65
# lifecycle:
#   preStop:
#     exec:
#       command: [ "sh", "-c", "sleep 60" ]

metrics:
  # Enable Prometheus metrics endpoint
  enabled: true
  # Serve Prometheus metrics on this port
  port: 44180
  # when service.type is NodePort ...
  # nodePort: 44180
  # Protocol set on the service for the metrics port
  service:
    appProtocol: http
  serviceMonitor:
    # Enable Prometheus Operator ServiceMonitor
    enabled: false
    # Define the namespace where to deploy the ServiceMonitor resource
    namespace: ""
    # Prometheus Instance definition
    prometheusInstance: default
    # Prometheus scrape interval
    interval: 60s
    # Prometheus scrape timeout
    scrapeTimeout: 30s
    # Add custom labels to the ServiceMonitor resource
    labels: {}

    ## scheme: HTTP scheme to use for scraping. Can be used with `tlsConfig` for example if using istio mTLS.
    scheme: ""

    ## tlsConfig: TLS configuration to use when scraping the endpoint. For example if using istio mTLS.
    ## Of type: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#tlsconfig
    tlsConfig: {}

    ## bearerTokenFile: Path to bearer token file.
    bearerTokenFile: ""

    ## Used to pass annotations that are used by the Prometheus installed in your cluster to select Service Monitors to work with
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    annotations: {}

    ## Metric relabel configs to apply to samples before ingestion.
    ## [Metric Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs)
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]

    ## Relabel configs to apply to samples before ingestion.
    ## [Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config)
    relabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

# Extra K8s manifests to deploy
extraObjects: []
  # - apiVersion: secrets-store.csi.x-k8s.io/v1
  #   kind: SecretProviderClass
  #   metadata:
  #     name: oauth2-proxy-secrets-store
  #   spec:
  #     provider: aws
  #     parameters:
  #       objects: |
  #         - objectName: "oauth2-proxy"
  #           objectType: "secretsmanager"
  #           jmesPath:
  #               - path: "client_id"
  #                 objectAlias: "client-id"
  #               - path: "client_secret"
  #                 objectAlias: "client-secret"
  #               - path: "cookie_secret"
  #                 objectAlias: "cookie-secret"
  #     secretObjects:
  #     - data:
  #       - key: client-id
  #         objectName: client-id
  #       - key: client-secret
  #         objectName: client-secret
  #       - key: cookie-secret
  #         objectName: cookie-secret
  #       secretName: oauth2-proxy-secrets-store
  #       type: Opaque
@tuunit
Member

tuunit commented Apr 7, 2025

Hi @shetsu01,

The azure provider has been deprecated; please try the new Entra ID provider instead.

https://oauth2-proxy.github.io/oauth2-proxy/configuration/providers/ms_entra_id/
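
A minimal extraArgs sketch (untested against B2C, and assuming your user flow is still named B2C_1_alloyflow) would look something like:

extraArgs:
  provider: "entra-id"
  client-id: "xxxxxxx"
  client-secret: "xxxxxxxx"
  scope: "openid profile email"
  email-domain: "*"
  # B2C issuer URLs generally include the user flow (policy) in the path;
  # use whatever your tenant's discovery document advertises as "issuer".
  oidc-issuer-url: "https://icscentralalloy.b2clogin.com/icscentralalloy.onmicrosoft.com/B2C_1_alloyflow/v2.0/"

The rest of your values can stay as they are; the important part is that the issuer URL resolves to a working discovery document.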

@eluchsinger

I'm getting the same issue with provider = "entra-id"

@tuunit
Member

tuunit commented Apr 27, 2025

@shetsu01 the error message is not from oauth2-proxy itself. If you visit the OIDC URL you specified:
https://icscentralalloy.b2clogin.com/19452690-09a8-47f3-89dc-335e5fe3ad21/v2.0/

You get:

The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.

Exactly what oauth2-proxy is reporting back to you in the logs.
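
For B2C, the discovery document normally lives under the user flow (policy), not under the bare tenant ID, so the URL you want to check is something along the lines of:

https://icscentralalloy.b2clogin.com/icscentralalloy.onmicrosoft.com/B2C_1_alloyflow/v2.0/.well-known/openid-configuration

The commented-out issuer URL in your values file (the one that includes B2C_1_alloyflow in the path) looks much closer to that format than the tenant-ID-only URL in extraArgs. Open the discovery URL in a browser first, and only point oauth2-proxy at the issuer it reports.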

@tuunit
Member

tuunit commented Apr 27, 2025

@tuunit
Member

tuunit commented Apr 27, 2025

Furthermore, you should not share your client ID or cookie secret. It's called a cookie secret for a reason ;)
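
If those were ever real values, rotate them: generate a fresh cookie secret with the command already quoted in your values file (openssl rand -base64 32 | head -c 32 | base64), create a new client secret in the B2C app registration, and consider using config.existingSecret (see secret.yaml in the chart for the required keys) so the credentials don't live in values.yaml at all.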
