  • TAP 1.6 – Tanzu Developer Portal Configurator

    TAP 1.6 – Tanzu Developer Portal Configurator

    Backstage is an amazing CNCF project, and it is the base for the Tanzu Developer Portal (TDP), previously known as TAP GUI.

    Since the initial GA of TAP we have had a great portal, enhanced with every release, but it has not been able to take full advantage of the power of Backstage.

    Backstage is a highly extensible project built on a plugin-based model. In the open source Backstage world, more than 150 plugins have been published, allowing for integrations with many common tools present in customer environments.

    Now with TAP 1.6 we have a new component, the Tanzu Developer Portal Configurator tool, which enables you to add plugins to the Tanzu Developer Portal, turning it into a customized portal!

    While TDP until now has been very locked down, the experience of integrating plugins in OSS Backstage is tedious and not for the faint of heart. The TDP Configurator is a great step toward making the integration of both third-party plugins and custom in-house plugins a simpler, more maintainable task.

    The TDP Configurator takes a list of the plugins that you want to add to your portal. With that list, it generates a developer portal customized to your specifications.

    The end result of the configurator is an OCI image that can be referenced when deploying TAP and configured the same way as the OOTB TDP image, providing a great experience with your own custom plugin setup to meet your organization’s needs.

    How Plugins Get Added

    One of the challenges with OSS Backstage and the integration of plugins is that for every plugin you need to manually edit the code base of Backstage itself.

    This process is tedious and error prone, and the TDP Configurator helps in this regard by introducing the concepts of surfaces and plugin wrappers.

    A surface is a discrete capability that a plugin provides. This can include:

    • The ability to show up on the sidebar
    • The ability to be accessed at a URL, such as https://YOUR_PORTAL_URL/plugin
    • The ability to show up as a Catalog Overview tab

    A wrapper is a method of exposing a plugin’s surfaces to the TDP Configurator so that the plugin can be integrated into the portal.

    A wrapper imports a reference to the underlying plugin and defines the surfaces that the plugin should expose.

    In other words, a wrapper gives us a defined specification in which we can declare how a plugin should be surfaced within the portal itself.

    While building wrapper plugins is out of scope for this blog post, keep your eyes open for more content coming soon with real-world examples of how to do it!

    Building a Custom Portal

    The mechanism for building a custom portal actually uses TAP itself, which is pretty cool!

    The general flow is that you create a configuration file defining the backend and frontend plugins you want added to your portal; these are themselves wrapper plugins, as discussed above.

    A sample config file may look like this:

    app:
      plugins:
        - name: '@tpb/plugin-hello-world'
          version: '^1.6.0-release-1.6.x.1'
    backend:
      plugins:
        - name: '@tpb/plugin-hello-world-backend'
          version: '^1.6.0-release-1.6.x.1'
    

    As can be seen above, we are adding one frontend and one backend plugin to our portal, both already created for us by the TAP team.

    These plugins are available within the configurator tool itself, which makes it extremely easy to get started and see the value of the tool!

    Once we have this config file, we need to base64 encode it and then pass it into our workload. This can be done using the following command:

    TDP_CONFIG_FILE_CONTENT=$(base64 -w0 < tdp-config.yaml)
    
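    As a sanity check, the encoded string can be decoded back and compared to the original file. A minimal sketch (the config content here is a trimmed example, and `-w0` assumes GNU base64, so omit it on macOS):

```shell
# Write a minimal example config (trimmed version of the file shown earlier).
cat > /tmp/tdp-config.yaml <<'EOF'
app:
  plugins:
    - name: '@tpb/plugin-hello-world'
EOF

# Encode without line wrapping (GNU base64; omit -w0 on macOS).
TDP_CONFIG_FILE_CONTENT=$(base64 -w0 < /tmp/tdp-config.yaml)

# Decoding should give back the original content unchanged.
DECODED=$(echo "$TDP_CONFIG_FILE_CONTENT" | base64 -d)
```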

    The other piece of data we need is the image reference of the builder image itself which can be retrieved with the following command:

    TDP_CONFIGURATOR_IMAGE=$(imgpkg pull -b $(kubectl get package -n tap-install tpb.tanzu.vmware.com.0.1.2 -o json | jq -r .spec.template.spec.fetch[0].imgpkgBundle.image) -o /tmp/tpb-bundle && yq e '.images[0].image' /tmp/tpb-bundle/.imgpkg/images.yml)
    

    With those two pieces of data, we can now create our workload manifest:

    cat <<EOF > tdp-workload.yaml
    apiVersion: carto.run/v1alpha1
    kind: Workload
    metadata:
      name: tdp-configurator
      labels:
        apps.tanzu.vmware.com/workload-type: web
        app.kubernetes.io/part-of: tdp-configurator
    spec:
      build:
        env:
          - name: BP_NODE_RUN_SCRIPTS
            value: 'set-tpb-config,portal:pack'
          - name: TPB_CONFIG
            value: /tmp/tpb-config.yaml
          - name: TPB_CONFIG_STRING
            value: $TDP_CONFIG_FILE_CONTENT
      source:
        image: $TDP_CONFIGURATOR_IMAGE
        subPath: builder
    EOF
    

    Now you can simply apply this workload to your cluster. Once the workload finishes the image build step, which is performed by TBS, you will need to retrieve the new image URI using the following command:

    NEW_TDP_IMAGE=$(kubectl get cnbimage tdp-configurator -o json | jq -r .status.latestImage)
    

    With that value, we need to create a YTT overlay secret, which we will apply to our TAP GUI package in order to configure it to use our custom portal.

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: tdp-app-image-overlay-secret
      namespace: tap-install
    stringData:
      tpb-app-image-overlay.yaml: |
        #@ load("@ytt:overlay", "overlay")
    
        #! makes an assumption that tap-gui is deployed in the namespace: "tap-gui"
        #@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "server", "namespace": "tap-gui"}}), expects="1+"
        ---
        spec:
          template:
            spec:
              containers:
                #@overlay/match by=overlay.subset({"name": "backstage"}),expects="1+"
                #@overlay/match-child-defaults missing_ok=True
                - image: $NEW_TDP_IMAGE
                  #@overlay/replace
                  args:
                  - -c
                  - |
                    export KUBERNETES_SERVICE_ACCOUNT_TOKEN="\$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
                    exec /layers/tanzu-buildpacks_node-engine/node/bin/node portal/dist/packages/backend  \\
                    --config=portal/app-config.yaml \\
                    --config=portal/runtime-config.yaml \\
                    --config=/etc/app-config/app-config.yaml
    EOF
    

    The final step is to add this overlay to our TAP values file and apply the changes.

    The section we need in our TAP values file will look like this:

    package_overlays:
      - name: tap-gui
        secrets:
          - name: tdp-app-image-overlay-secret
    

    Once you update TAP with the new values file, you can access TAP and see a new icon in the left side menu for our newly added Hello World plugin!

    Summary

    While the process of adding new plugins is still not 100% streamlined, and it does require some TypeScript and Backstage knowledge, the fact that this is now possible is in and of itself extremely promising!

    I’m truly excited to see the UX and feature set of this new component evolve over time, and I’m looking forward to enhancing our customers’ environments with the plugins relevant to their needs, making the Tanzu Developer Portal truly a one-stop shop for their developers!

  • TAP 1.6 – What’s New

    TAP 1.6 – What’s New

    TAP 1.6 is a huge release with some really awesome new features, including brand new components as well as improvements to existing ones.

    In this series of posts, we will take a look at some of the key new features in TAP 1.6 including:

    Brand New Components:

    1. Local Source Proxy
    2. Tanzu Developer Portal Configurator
    3. Artifact Metadata Repository Observer
    4. CVE triage flow via the Tanzu Insight CLI

    Improved Features

    1. Supply Chain Backstage Plugin Improvements
    2. Crossplane Upgrade
    3. New Bitnami Services
    4. AppSSO Improvements
    5. Tanzu CLI Management Improvements
    6. TBS Improvements
    7. App Live View Improvements
    8. GitOps Installation using Vault
    9. App Scanning 2.0
    10. Namespace Provisioner Improvements
    11. Metadata Store Improvements
    12. IDE Plugin Improvements

    This is only a partial list and many other updates have been made, but these are the key ones that I believe are truly impressive to see.

    It never ceases to amaze me how quickly VMware is shipping new features in TAP, making it a truly amazing, one-of-a-kind offering!

  • Monitoring TAP With Prometheus And Grafana

    Monitoring TAP With Prometheus And Grafana

    In the Kubernetes world, Prometheus and Grafana are the de facto standards for monitoring.

    TAP is a truly amazing platform, but one of the main areas where it currently lacks is a good monitoring story. While some of the components used within TAP have metrics endpoints, many of them don’t, and even for those that do, correlating between one another is not a simple task.

    Recently, while working on some upstream work around Cluster API, I became aware of a great new feature in the commonly used Prometheus exporter Kube State Metrics (KSM).

    KSM is typically used to pull data from the status fields and specs of common Kubernetes resources and convert them into metrics that can be scraped by Prometheus.

    KSM now includes the ability to define a config file with the desired mapping of resource fields to metric labels and values for any custom resource, and it can then provide metrics for those resources as well!
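    To give a feel for the format, a custom-resource-state entry for Cartographer workloads might look roughly like the sketch below. The group/version/kind values come from the Workload CRD; the metric paths and prefix here are illustrative assumptions based on the KSM custom-resource-state config format, not my exact config:

```yaml
kind: CustomResourceStateMetrics
spec:
  resources:
    - groupVersionKind:
        group: carto.run
        version: v1alpha1
        kind: Workload
      # Assumed prefix so metrics come out as cartographer_workload_info etc.
      metricNamePrefix: cartographer
      metrics:
        - name: workload_info
          help: Metadata about Cartographer workloads
          each:
            type: Info
            info:
              labelsFromPath:
                name: [metadata, name]
                namespace: [metadata, namespace]
```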

    With this in mind, I decided to perform a small POC of building out a monitoring suite for TAP using KSM.

    For this setup I decided to use the kube-prometheus-stack Helm chart, which makes deploying a fully functional Prometheus and Grafana stack as easy as it gets, and then to extend the configuration to support the TAP CRDs and generate the needed metrics.

    For this initial POC I decided to focus on a subset of the TAP resources.

    The resource types and corresponding Prometheus metrics being monitored are:

    • workloads – cartographer_workload_info, cartographer_workload_status
    • deliverables – cartographer_deliverable_info, cartographer_deliverable_status
    • service bindings – service_binding_info, service_binding_status
    • cluster instance classes – stk_cluster_instance_class_composition_selector, stk_cluster_instance_class_status
    • class claims – stk_class_claim_info, stk_class_claim_status
    • resource claims – stk_resource_claim_info, stk_resource_claim_status
    • knative services – knative_service_info, knative_service_status
    • knative revisions – knative_revision_info, knative_revision_status
    • kapp controller package repositories – carvel_packagerepository_info
    • kapp controller package installations – carvel_packageinstall_info
    • kapp controller apps – carvel_app_info, carvel_app_namespaces
    • api descriptors – api_descriptor_info, api_descriptor_status
    • tekton pipeline runs – tekton_pipeline_run_info, tekton_pipeline_run_status
    • tekton task runs – tekton_task_run_info, tekton_task_run_status
    • accelerators – accelerator_info, accelerator_imports_info, accelerator_status
    • fragments – accelerator_fragment_info, accelerator_fragment_status
    • flux git repositories – flux_git_repository_info, flux_git_repository_status
    • image scans – scst_image_scan_info, scst_image_scan_status
    • source scans – scst_source_scan_info, scst_source_scan_status
    • kpack images – kpack_image_info, kpack_image_status
    • kpack builds – kpack_build_info, kpack_build_involved_buildpacks, kpack_build_status

    With these resources and metrics defined, I was able to create a simple yet very powerful Grafana dashboard for visualizing the state of my TAP environment.

    First, we can show the status of our package installations.

    We can then show the status of our workloads and some data about them, including which workloads are utilizing the live update and remote debugging features, which is a great way to see how your developers are utilizing the platform!

    We can then show details about our Flux Source Controller resources

    And then we can deep dive into TBS metrics including things like how many workloads are using different buildpacks!

    We can also show image scanning results and statistics

    We can also show details about Knative configurations as well

    We then can look into application accelerators and their status in our TAP environment, as well as API descriptors which are registered with TAP GUI

    CD is also important, so we can see the details and metrics about our deliverables as well.

    And finally we can dig into the service bindings and Services Toolkit resources providing backing services to our workloads!

    As you can tell, with just this YAML configuration, and without a single line of code, the options are endless!

    While this is currently in a POC state, I believe it shows a true potential and already provides value when operating a TAP environment.

    For those interested in getting this running in your environment, you can take a look at my GitHub repository with the Kube State Metrics configuration. While the dashboard is not available today, it should be easy for anyone to build one based on the metrics they care about.

  • Deploying TAP on TKGi

    Deploying TAP on TKGi

    Recently I was working on a deployment of TAP for a customer on top of a few TKGi clusters.

    While TAP works on any conformant Kubernetes cluster, as I have said many times, Kubernetes is not 100% cloud agnostic, and every distribution can have some weird quirks.

    When deploying to TKGi, a few of these quirks came up, and in this post we will discuss them and how to solve them.

    Docker Issue

    One important thing to validate before installing TAP is that you have migrated your TKGi clusters from Docker to containerd.

    In most cases this should happen automatically when moving to a recent version of TKGi, but if for some reason you are still on Docker, be warned that the installation will not succeed. TAP does not work with a Docker-based container runtime in Kubernetes and requires containerd in order to function properly.

    Contour Issue

    When we deployed TAP onto the clusters, the envoy pods of Contour simply would not start.

    After investigating this issue together with GSS, we found that the Contour package provided in TAP does not work out of the box on clusters whose nodes are configured for IPv4-only networking, with IPv6 disabled at the node level.

    This was a change made in the Tanzu packaging of Contour: as of TAP 1.3, Contour defaults to IPv6 with IPv4 compatibility.

    When debugging the issue, we found the following in the envoy pod logs:

    [2023-03-16 12:05:57.334][1][info][upstream] [source/common/upstream/cds_api_helper.cc:35] cds: add 10 cluster(s), remove 2 cluster(s)
    [2023-03-16 12:05:57.334][1][info][upstream] [source/common/upstream/cds_api_helper.cc:72] cds: added/updated 0 cluster(s), skipped 10 unmodified cluster(s)
    [2023-03-16 12:15:09.584][1][warning][config] [source/common/config/grpc_subscription_impl.cc:126] gRPC config for type.googleapis.com/envoy.config.listener.v3.Listener rejected: Error adding/updating listener(s) ingress_http: malformed IP address: ::
    ingress_https: malformed IP address: ::
    stats-health: malformed IP address: ::
    

    To solve this, we need to add a simple overlay that changes the flags passed to the Contour deployment, switching it to use IPv4 instead of IPv6.

    The first step is to create a secret with an overlay like the one below:

    apiVersion: v1
    kind: Secret
    metadata:
      name: ipv4-overlay
      namespace: tap-install
    stringData:
      ipv4-overlay.yaml: |
        #@ load("@ytt:overlay", "overlay")
        #@overlay/match by=overlay.subset({"metadata":{"name":"contour"}, "kind": "Deployment"})
        ---
        spec:
          template:
            spec:
              containers:
                #@overlay/match by="name"
                - name: contour
                  #@overlay/replace
                  args:
                  - serve
                  - --incluster
                  - '--xds-address=0.0.0.0'
                  - --xds-port=8001
                  - '--stats-address=0.0.0.0'
                  - '--http-address=0.0.0.0'
                  - '--envoy-service-http-address=0.0.0.0'
                  - '--envoy-service-https-address=0.0.0.0'
                  - '--health-address=0.0.0.0'
                  - --contour-cafile=/certs/ca.crt
                  - --contour-cert-file=/certs/tls.crt
                  - --contour-key-file=/certs/tls.key
                  - --config-path=/config/contour.yaml
    

    Next, we need to update our TAP values file to instruct TAP to use this overlay and apply it to the Contour package.

    This can easily be done by using the package_overlays section in our TAP values and adding a snippet like the one below:

    package_overlays:
    - name: contour
      secrets:
      - name: ipv4-overlay
    

    This solves the issue: once applied to the cluster, the envoy pods will enter a running state and Contour will deploy successfully as expected.

    Source Testing Issue

    On TKGi, when using NCP as the CNI, there are some quirks one can encounter. One of them relates to the fact that NCP syncs labels from pods into NSX tags.

    The issue seems to be with labels that have a value of "true" such as:

    apps.tanzu.vmware.com/auto-configure-actuators: "true"
    apps.tanzu.vmware.com/has-tests: "true"
    

    With these labels, NCP seems unable to create the tag, and as such does not finish the networking configuration of the pod, causing the pods to get stuck in an initialization phase indefinitely.

    This issue is odd, but it can easily be fixed by adding a simple overlay that removes these labels from the pods before they are created.

    The first step is to create a secret as follows:

    apiVersion: v1
    kind: Secret
    metadata:
      name: testing-template-labels-overlay
      namespace: tap-install
    type: Opaque
    stringData:
      testing-template-labels-overlay.yaml: |
        #@ load("@ytt:overlay","overlay")
        
        #@ def testing_template_matcher():
        apiVersion: carto.run/v1alpha1
        kind: ClusterSourceTemplate
        metadata:
          name: testing-pipeline
        #@ end
    
        #@overlay/match by=overlay.subset(testing_template_matcher())
        ---
        spec:
          ytt: |
            #@ load("@ytt:data", "data")
            #@ load("@ytt:overlay", "overlay")
    
            #@ def merge_labels(fixed_values):
            #@   labels = {}
            #@   if hasattr(data.values.workload.metadata, "labels"):
            #@     labels.update(data.values.workload.metadata.labels)
            #@   end
            #@   labels.update(fixed_values)
            #@   return labels
            #@ end
            
            #@ def bad_labels():
            #@   if/end hasattr(data.values.workload.metadata.labels, "apps.tanzu.vmware.com/has-tests"):
            #@overlay/remove
            apps.tanzu.vmware.com/has-tests: "true"
            #@   if/end hasattr(data.values.workload.metadata.labels, "apps.tanzu.vmware.com/auto-configure-actuators"):
            #@overlay/remove missing_ok=True
            apps.tanzu.vmware.com/auto-configure-actuators: "true"
            #@ end
    
            #@ def merged_tekton_params():
            #@   params = []
            #@   if hasattr(data.values, "params") and hasattr(data.values.params, "testing_pipeline_params"):
            #@     for param in data.values.params["testing_pipeline_params"]:
            #@       params.append({ "name": param, "value": data.values.params["testing_pipeline_params"][param] })
            #@     end
            #@   end
            #@   params.append({ "name": "source-url", "value": data.values.source.url })
            #@   params.append({ "name": "source-revision", "value": data.values.source.revision })
            #@   return params
            #@ end
            ---
            apiVersion: carto.run/v1alpha1
            kind: Runnable
            metadata:
              name: #@ data.values.workload.metadata.name
              labels: #@ overlay.apply(merge_labels({ "app.kubernetes.io/component": "test" }),bad_labels())
            spec:
              #@ if/end hasattr(data.values.workload.spec, "serviceAccountName"):
              serviceAccountName: #@ data.values.workload.spec.serviceAccountName
    
              runTemplateRef:
                name: tekton-source-pipelinerun
                kind: ClusterRunTemplate
    
              selector:
                resource:
                  apiVersion: tekton.dev/v1beta1
                  kind: Pipeline
    
                #@ not hasattr(data.values, "testing_pipeline_matching_labels") or fail("testing_pipeline_matching_labels param is required")
                matchingLabels: #@ data.values.params["testing_pipeline_matching_labels"] or fail("testing_pipeline_matching_labels param cannot be empty")
    
              inputs:
                tekton-params: #@ merged_tekton_params()
    

    Once this secret is applied, we simply need to use the package_overlays section in the TAP values file to instruct TAP to apply this change to the OOTB Templates package:

    package_overlays:
    - name: ootb-templates
      secrets:
      - name: testing-template-labels-overlay
    

    Once this is applied and TAP reconciles, your testing pods will work as expected.

    Prisma Scanner Issue

    For this customer we are using the Prisma scanner, and there too we encountered the label issue, as well as another issue regarding security context configuration.

    These issues are easily fixable in two steps.

    The label issue will be fixed in one step, and as the Prisma package is not part of the TAP installation itself, it will be fixed in a separate step.

    To fix the label issue, we can create the following overlay secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: scan-stamping-labels-overlay
      namespace: tap-install
    type: Opaque
    stringData:
      scan-stamping-labels-overlay.yaml: |
        #@ load("@ytt:overlay","overlay")
        #@ def scan_template_matcher():
        apiVersion: carto.run/v1alpha1
        kind: ClusterSourceTemplate
        metadata:
          name: source-scanner-template
        #@ end
        #@overlay/match by=overlay.subset(scan_template_matcher())
        ---
        spec:
          ytt: |
            #@ load("@ytt:data", "data")
            #@ load("@ytt:overlay", "overlay")
            #@ def merge_labels(fixed_values):
            #@   labels = {}
            #@   if hasattr(data.values.workload.metadata, "labels"):
            #@     labels.update(data.values.workload.metadata.labels)
            #@   end
            #@   labels.update(fixed_values)
            #@   return labels
            #@ end
            
            #@ def bad_labels():
            #@ if/end hasattr(data.values.workload.metadata.labels, "apps.tanzu.vmware.com/has-tests"):
            #@overlay/remove
            apps.tanzu.vmware.com/has-tests: "true"
            #@ if/end hasattr(data.values.workload.metadata.labels, "apps.tanzu.vmware.com/auto-configure-actuators"):
            #@overlay/remove missing_ok=True
            apps.tanzu.vmware.com/auto-configure-actuators: "true"
            #@ end
            ---
            apiVersion: scanning.apps.tanzu.vmware.com/v1beta1
            kind: SourceScan
            metadata:
              name: #@ data.values.workload.metadata.name
              labels: #@ overlay.apply(merge_labels({ "app.kubernetes.io/component": "source-scan" }),bad_labels())
            spec:
              blob:
                url: #@ data.values.source.url
                revision: #@ data.values.source.revision
              scanTemplate: #@ data.values.params.scanning_source_template
              #@ if data.values.params.scanning_source_policy != None and len(data.values.params.scanning_source_policy) > 0:
              scanPolicy: #@ data.values.params.scanning_source_policy
              #@ end
    

    We can now use the package_overlays section in the TAP values file to apply these changes:

    package_overlays:
    - name: ootb-templates
      secrets:
      - name: scan-stamping-labels-overlay
    

    As mentioned above, we also need to update the Prisma package installation.

    In this case, as Prisma is not installed as part of the TAP installation itself, we need to apply the overlay ourselves on the package installation.

    First we need to create the overlay secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: prisma-sec-context-overlay
      namespace: tap-install
    type: Opaque
    stringData:
      prisma-sec-context-overlay.yaml: |
        #@ load("@ytt:overlay","overlay")
        ---
        #@ def st_matcher():
        apiVersion: scanning.apps.tanzu.vmware.com/v1beta1
        kind: ScanTemplate
        #@ end
        #@overlay/match by=overlay.subset(st_matcher()), expects="1+"
        ---
        spec:
          template:
            #@overlay/match missing_ok=True
            #@overlay/remove
            securityContext:
              runAsNonRoot: true 
    

    We can now apply this overlay to our Prisma package installation with the following command:

    kubectl annotate pkgi -n tap-install prisma ext.packaging.carvel.dev/ytt-paths-from-secret-name.0=prisma-sec-context-overlay
    

    Summary

    While there were indeed some issues encountered with TAP on TKGi, overall, with just a few overlays, we got it working end to end pretty easily. The mechanisms and intricacies of YTT overlays and Carvel packaging are indeed a steep learning curve, but once you get the hang of them, they are an extremely powerful and amazing toolset to have at your fingertips.

  • TAP 1.5 – IDE Plugin Enhancements

    TAP 1.5 – IDE Plugin Enhancements

    TAP IDE Plugins

    In TAP, one of the key elements and interaction points with the platform is the set of IDE plugins, which currently exist for three IDEs:

    1. Visual Studio Code
    2. IntelliJ
    3. Visual Studio

    These plugins are the key interaction point that developers use on a day-to-day basis to develop their applications on TAP-enabled Kubernetes clusters.

    In every release of TAP, many advances are made in these plugins, making them more and more capable, feature rich, and user friendly.

    Let’s take a look at the key new features in TAP 1.5 for the IDE plugins.

    Visual Studio Code Extensions

    In Visual Studio Code, we have two extensions available for TAP:

    1. Developer Tools
    2. Application Accelerator

    The Developer Tools extension allows us to utilize the key features for inner-loop development, which include live update functionality and remote debugging of Java applications.

    The Application Accelerator extension brings the app accelerator catalog from TAP GUI directly into our IDE, allowing developers to bootstrap new applications directly from their IDE!

    In TAP 1.5, the major changes were made to the Developer Tools extension, so let’s dig in and see what they are!

    Developer Tools Enhancements

    This extension got a bunch of really awesome new features in this release.

    1. A new Tanzu Activity panel was added
    2. Multi namespace visibility
    3. Tanzu quick actions are available in the workload panel

    Let’s see what these enhancements provide.

    Tanzu Activity Panel

    TAP is an amazing platform, and the user interface of a workload YAML is truly awesome. One of the key selling points and benefits of TAP is that the developer really does not need to be a Kubernetes expert in order to use the platform, as TAP deals with all the underlying steps, manifest generation, and so on.

    While this is true in most cases, we all know that errors can happen, and getting an understanding of what is happening under the hood is also really important.

    Previously, a developer who wanted to understand what resources were created, what failed, what the logs of the failed step are, what the error is, and so on, would need to open a terminal and start using Tanzu CLI and kubectl commands.

    With this release of the extension, we now have a new panel in the IDE called the Tanzu Activity panel, which shows us the entire resource hierarchy of our workloads in a clear and easily understandable way. It also shows us where any errors are to be found, and allows us to get the logs, describe a resource, and retrieve the error message itself from any of the relevant resources.

    Let’s see what this looks like:

    You can see a list of all the running workloads.

    You can then drill down and see the three sections the resources are broken up into:

    Running Application:


    Supply Chain:


    Delivery:


    On any of these resources, we can also click and get the action menu:


    This is a huge improvement, and makes debugging when needed a breeze!

    Multi Namespace Visibility

    Until TAP 1.5, the workloads shown in the Workloads panel, and the errors visible in the Tanzu Debug panel, were all visualized from a single namespace: the namespace you were targeting via your kubeconfig.

    While this in and of itself was already very nice, in TAP 1.5 we can now visualize workloads across multiple namespaces!

    Let’s see what this looks like.

    In the Tanzu Workloads panel, you can click the three dots at the top and select the option to select namespaces.

    This will open an action bar at the top of the window where you can select the set of namespaces you want to view.

    From this point on, workloads in any of the selected namespaces are visualized for you automatically when you open VSCode!

    This brings a much wider perspective to the developer directly in their IDE, again enhancing the Developer experience.

    Quick Actions From The Workloads Panel

    Previously, actions such as starting live update, deleting a workload, starting remote debugging, and applying a workload were all done in the file viewer under the relevant workspace. With TAP 1.5, we can now run any of these actions from the Tanzu Workloads panel as well, making it easier for the developer to run what they need when they need it, without jumping around to find exactly where to click!


    IntelliJ Extension Enhancements

    In IntelliJ, the major changes that were made are:

    1. A new Application Accelerator Plugin
    2. Multi Namespace Visibility

    New Application Accelerator Plugin

    Regarding the new Application Accelerator plugin, I have written a dedicated blog post about the new functionality in the Application Accelerator component of TAP, with details of the new plugin and its UX. Here I will just mention that we now have this plugin available to us, bridging the gap between the VSCode and IntelliJ experiences and bringing a truly amazing developer experience directly to the IDE, from the initial phase of project generation all the way through deployment to Kubernetes and remote debugging of our apps.

    Multi Namespace Visibility

    This addition, similar to the enhancement in the VSCode extension, allows us to visualize workloads across multiple namespaces at the same time. This is a great enhancement, making the UX for developers much smoother.

    From the current Tanzu Activity panel, one can now click on the settings icon and select the new "Select Namespaces" option:

    We will then be presented with a popup list, where we can select the namespaces we want to see workloads from:


    As can be seen, the UX is very intuitive, and this small but powerful feature makes the developer experience much smoother in real-world scenarios.

    Summary

    As you can see, the enhancements in the IDE plugins make the end-user experience with TAP much smoother and more user friendly, helping make TAP the ultimate DevEx platform!

    I’m truly excited to see how the extensions evolve in the upcoming releases!

  • TAP 1.5 – Application Configuration Service

    TAP 1.5 – Application Configuration Service

    What Is ACS

    ACS, or Application Configuration Service, is a new optional component added in TAP 1.5. ACS is a Kubernetes-native replacement for Spring Cloud Config Server, which was an essential part of Spring Cloud based microservice architectures, heavily used by Spring workloads in the Cloud Foundry and Azure Spring Apps world.

    Spring Cloud Config Server enabled storing runtime config for Spring apps in Git repositories. These configs could be stored on different branches and in directories, and were used to generate runtime configuration properties for applications running in multiple environments.
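
    For example, such a config repo might hold per-application, per-environment property files. This layout and these property names are purely hypothetical, just to illustrate the idea:

```yaml
# greeter/dev/application.yml in the config repo (hypothetical layout)
# Other environments could live in greeter/prod/, or on separate branches.
greeting:
  message: "Hello from the dev environment"
logging:
  level:
    root: DEBUG
```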

    ACS is compatible with the existing Git repository configuration management approach, and offers a Kubernetes-native integration layer, making the configuration fit much more seamlessly with applications deployed to TAP. ACS filters runtime configuration for an application using slices that produce secrets, which can then be bound using Service Claims, like any other external configuration or service binding in a TAP environment.

    When ACS Should Be Used

    ACS is a great replacement for any usage you may currently have of Spring Cloud Config Server, and can take its place with minimal changes.

    ACS is extremely beneficial if you are migrating to TAP from PCF/TAS environments, where SCCS is very commonly used; having a native integration for the same features via ACS allows for a much easier migration path for your applications.

    What Does the UX Look Like

    The first thing one would do is configure a CR of type "ConfigurationSource", pointing at the Git repository where your SCCS config resides. For example:

    apiVersion: "config.apps.tanzu.vmware.com/v1alpha4"
    kind: ConfigurationSource
    metadata:
      name: greeter-config-source
      namespace: my-apps
    spec:
      backends:
        - type: git
          uri: https://github.com/your-org/your-config-repo
    

    Once we have our source configured, we can then define a slice, which points at a specific config in that source repo. For example:

    apiVersion: config.apps.tanzu.vmware.com/v1alpha4
    kind: ConfigurationSlice
    metadata:
      name: greeter-config
      namespace: my-apps
    spec:
      configurationSource: greeter-config-source
      content:
      - greeter/dev
      configMapStrategy: applicationProperties
      interval: 10m
    

    As you can see, this is a very simple and clear UX. The spec of these resources can be much more complex, and these are just examples, but the idea is always the same.

    You can find the full documentation on this service here.

    Once we have configured the ACS resources, we can simply create a resource claim, which allows us to bind the config to our workload like so:

    apiVersion: services.apps.tanzu.vmware.com/v1alpha1
    kind: ResourceClaim
    metadata:
      name: greeter-config-claim
      namespace: my-apps
    spec:
      ref:
        apiVersion: config.apps.tanzu.vmware.com/v1alpha4
        kind: ConfigurationSlice
        name: greeter-config
    

    And now we can reference this resource claim in our workload like so:

    apiVersion: carto.run/v1alpha1
    kind: Workload
    metadata:
      name: greeter-messages
      namespace: my-apps
      labels:
        apps.tanzu.vmware.com/workload-type: web
        app.kubernetes.io/part-of: greeter
    spec:
      build:
        env:
        - name: BP_JVM_VERSION
          value: "17"
        - name: BP_GRADLE_BUILT_MODULE
          value: "greeter-messages"
      env:
      - name: SPRING_CONFIG_IMPORT
        value: "${SERVICE_BINDING_ROOT}/spring-properties/"
      serviceClaims:
      - name: spring-properties
        ref:
          apiVersion: services.apps.tanzu.vmware.com/v1alpha1
          kind: ResourceClaim
          name: greeter-config-claim
      source:
        git:
          url: https://github.com/spring-cloud-services-samples/greeting
          ref:
            branch: main
    

    Summary

    While this may not be a game-changing feature for many, the ability it brings to easily migrate apps from TAS/PCF/ASA environments into TAP is a huge accomplishment, and a huge step toward TAP becoming the de-facto standard for running Spring based apps at scale in containerized environments!

    I am truly excited to see these types of features being added to TAP, making the migration story into TAP that much smoother for brownfield environments.

  • TAP 1.5 – App SSO Enhancements

    TAP 1.5 – App SSO Enhancements

    App SSO Overview

    App SSO is one of the great features offered in TAP, and it has received a bunch of love in TAP 1.5. App SSO provides the APIs needed for curating and consuming a "Single Sign-On as a service" offering on Tanzu Application Platform.

    Through App SSO, one can easily integrate TAP based workloads with SSO, in a secure, simple, and straightforward manner.

    What’s new in TAP 1.5?

    In TAP 1.5, App SSO has been enhanced with a few key new sets of functionality. The ones I am extremely excited about are:

    1. New AuthServer CORS API
    2. Role claim mapping from External IDP group membership
    3. New default auth scopes for users

    Let’s take a look at each of these features and what they bring to the table.

    Auth Server CORS API

    When working with SPAs and mobile apps with SSO, we need to configure CORS which is never a fun thing to deal with. Now, in TAP 1.5, we have a very simple and clean UX for defining CORS for Public Clients as part of the Auth Server CR.

    With this new API, we can easily enable web apps that utilize the PKCE authentication flow.

    While TAP does support allowing all origins for CORS, it is not recommended to do so from a security perspective, and this should be done with great caution.

    Let’s see what the UX looks like. The first step is to define the Auth Server as before, now adding the CORS configuration:

    kind: AuthServer
    # ...
    spec:
      cors:
        allowOrigins:
        - "https://vrabbi.cloud"
        - "https://*.vrabbi.cloud"
    

    As can be seen, both exact matches, and wildcards are supported in the allowed origins array.

    As mentioned, you could allow all origins if needed, as follows:

    kind: AuthServer
    metadata:
      annotations:
        sso.apps.tanzu.vmware.com/allow-unsafe-cors: ""
    spec:
      cors:
        allowAllOrigins: true
    

    As seen, we need to specify in the spec that we want all origins to be allowed, as well as add the allow-unsafe-cors annotation. TAP does not want to prevent you from creating such a client, but wishes to deter you from doing so unless needed, as this is not a secure configuration and should be avoided when possible.

    Once you have defined a Public Client and an Auth Server with CORS enabled, you must set the authentication method to none when creating the client registration, as can be seen below:

    kind: ClientRegistration
    spec:
      clientAuthenticationMethod: none
    

    This is needed because Public Clients do not authenticate with a client secret; the PKCE flow is utilized instead.

    Role claim mapping from External IDP group membership

    This new feature is another great enhancement, making App SSO much more user friendly.

    This feature allows us to easily map and filter the groups a user is a part of, as returned from the upstream IDP at login, into a set of roles under the roles claim of the JWT token that App SSO and the relevant Auth Server provide to your apps.

    This new feature supports two types of filters for finding the relevant groups to map into roles: exactMatch and regex.

    These two methods enable just enough flexibility while still keeping the API simple, and allow us to map group memberships from our upstream IDP into roles for our downstream apps in a simple manner.

    This feature is configured at the Auth Server level, per IDP. For example:

    spec:
      identityProviders:
        - name: my-ldap
          ldap:
            roles:
              filterBy:
                - exactMatch: "admin-users"
                - regex: "^users-*"
        - name: my-oidc
          openid:
            roles:
              filterBy:
                - exactMatch: "admin-users"
                - regex: "^users-*"
        - name: my-saml
          saml:
            roles:
              filterBy:
                - exactMatch: "admin-users"
                - regex: "^users-*"
    

    As can be seen, this is supported for OIDC, LDAP and SAML IDPs, making the configuration really simple and easy to integrate, no matter your setup.

    Default Auth Scopes

    This is a long requested feature, that is truly awesome to see being added. With this feature, you can now define authorization scopes that are automatically granted to all users from a specific IDP, regardless of their user role.

    For example, given an AuthServer with an OIDC IDP configured, with defined authorization scope defaults:

    kind: AuthServer
    spec:
      identityProviders:
      - name: my-oidc
        openid:
          accessToken:
            scope:
              defaults:
              - "user.read"
              - "user.write"
    

    With the above config, a client registration can be created, requesting the scopes:

    kind: ClientRegistration
    spec:
      scopes:
      - name: "roles"
      - name: "user.read"
    

    Now that the client registration is added, when a Workload is registered using the ClientRegistration, that workload, on behalf of the user, can request and automatically be granted the scope user.read within the issued access token, regardless of that user's group memberships.
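
    To make the effect concrete, here is a sketch of what the decoded access token claims might contain for a user whose groups match no role filters. The claim names follow common JWT conventions, and the values are hypothetical, not taken from App SSO's exact token layout:

```yaml
# Hypothetical decoded access-token claims (illustrative only)
sub: "scott@example.com"
iss: "https://authserver.example.com"
# user.read appears because it is a default scope for the IDP,
# even though this user's group memberships grant no roles:
scope:
- "openid"
- "user.read"
roles: []
```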

    This allows for some pretty powerful use cases, and I am excited to see where people take this.

    Spring Cloud Gateway Integration

    Another great feature, which is not new in TAP 1.5 but worth noting, is the simple integration one can do between Spring Cloud Gateway (SCG) and App SSO. The reason this is worth mentioning is that in TAP 1.5, SCG is now included as an optional package, making it available to a much wider set of customers.

    This is an area I expect to see grow over the next few releases, and it will be interesting to see how customers end up implementing these integrations and configurations in real-world scenarios.

    Summary

    As you can tell, VMware have put a lot of effort into App SSO in this release, adding key features which simplify the UX and reduce the amount of customization needed in your own app to include SSO. This brings us all a step closer to a more secure landscape and better protected applications, which is always a huge benefit, especially when provided by a platform with almost no overhead to manage!

  • TAP 1.5 – Spring Cloud Gateway

    TAP 1.5 – Spring Cloud Gateway

    In TAP 1.5, an amazing addition was made with the inclusion of an additional package, "Spring Cloud Gateway For Kubernetes"! This is a VMware product which was initially released in February 2021 as a standalone product, and is now included as part of TAP!

    Spring Cloud Gateway for Kubernetes is based on the open source Spring Cloud Gateway project.

    What Is Spring Cloud Gateway For Kubernetes

    Spring Cloud Gateway for Kubernetes is targeted to be the API gateway solution that application developers love, and that IT Operators are happy exists! It handles cross-cutting concerns on behalf of development teams, such as:

    • Single Sign-On (SSO)
    • access control
    • rate limiting
    • resiliency
    • security

    SCG helps to accelerate API delivery using modern cloud native patterns, using any programming language you choose, and integrating with your existing CI/CD pipeline strategy.

    What Does SCG For Kubernetes Add On Top Of The Open Source Project

    Beyond the upstream SCG capabilities, SCG for Kubernetes enhances the offering by integrating with other Spring ecosystem projects such as Spring Security and Spring Session.

    Beyond these key additions, we also get extra features, available only in the commercial offering, which include:

    • Kubernetes native integration, using a set of CRDs, managed by the SCG for Kubernetes operator
    • Dynamic API route configuration
    • Support for API updates through existing CI/CD pipelines
    • Simple SSO configuration
    • Commercial API route filters to enable authentication and access control
    • OpenAPI v3 auto-generated documentation
    • Horizontal and Vertical scaling configuration, to reach High Availability and meet performance requirements

    What Does the UX Look Like

    The first step, after installing SCG from the TAP package repository, is to create a gateway instance, which can be as simple as:

    apiVersion: tanzu.vmware.com/v1
    kind: SpringCloudGateway
    metadata:
      name: my-api-gateway
    spec:
      count: 1
    

    While this resource seems very simple, we can actually add a lot of logic at the gateway level as well. A more advanced setup may look like this:

    apiVersion: "tanzu.vmware.com/v1"
    kind: SpringCloudGateway
    metadata:
      name: my-advanced-api-gateway
      namespace: platform-ops-system
    spec:
      api:
        serverUrl: https://my-advanced-api-gateway.example.com
        title: my advanced api gateway
        description: Micro Gateway to control internal APIs of my app
        version: 0.1.0
        cors:
          allowedOrigins:
            - api-portal.example.com
      count: 3
      sso:
        secret: sso-secret
      extensions:
        secretsProviders:
          - name: vault-jwt-keys
            vault:
              roleName: scg-role
        filter:
          jwtKey:
            enabled: true
            secretsProviderName: vault-jwt-keys   
      env:
        - name: spring.cloud.gateway.httpclient.connect-timeout
          value: "90s"
      healthCheck:
        enabled: true
        interval: 30s
      observability:
        tracing:
          wavefront:
            enabled: true
        metrics:
          wavefront:
            enabled: true
        wavefront:
          secret: wavefront-secret
          source: my-advanced-api-gateway
          application: my-app
          service: my-advanced-api-gateway
          prometheus:
            enabled: true
            serviceMonitor:
              enabled: true
              labels:
                release: my-advanced-api-prometheus
    

    As you can see, we can add a lot of logic in a gateway, and by abstracting it away from our applications, we make everyone’s lives so much easier!

    Once we have a gateway, we now need to define our routes. This is done in a separate CR called "SpringCloudGatewayRouteConfig". Let's see an example of what this may look like:

    apiVersion: "tanzu.vmware.com/v1"
    kind: SpringCloudGatewayRouteConfig
    metadata:
      name: suppliers-routes-config
    spec:
      service: 
        name: suppliers-api
      routes:
        - predicates:
            - Path=/list-outgoing-payments/
            - Method=GET
          filters:
            - RateLimit=2,10s
            - CircuitBreaker="suppliersCircuitBreaker,forward:/alternative-payments-service"
        - predicates:
            - Path=/process-payment/*/supplier/**
            - Method=POST,PUT,DELETE
          ssoEnabled: true
    

    As you can see, we can easily define, based on many characteristics of a request, how it should be handled.

    The upside of splitting the route configuration from the gateway is huge: the configuration at the gateway level is a platform and IT team concern, while the routes themselves are what the developers own. By splitting these concerns into separate APIs, we get a much cleaner boundary and separation of concerns between the relevant teams in the organization.

    The final resource we have is the glue that stitches this all together: the 3rd and final CR added as part of SCG, called "SpringCloudGatewayMapping".

    This resource simply binds a route configuration to a specific gateway.

    This would for example look like:

    apiVersion: "tanzu.vmware.com/v1"
    kind: SpringCloudGatewayMapping
    metadata:
      name: suppliers-routes-mapping
    spec:
      gatewayRef:
        name: my-advanced-api-gateway
        namespace: platform-ops-system
      routeConfigRef:
        name: suppliers-routes-config
    

    The power we get with SCG for Kubernetes is amazing, and having it in a Kubernetes-native implementation allows us to manage our API gateway configuration using industry standards such as GitOps. Because this is defined as a Kubernetes resource, we can also apply policy tooling to enforce certain organizational requirements. For example, we could create OPA-based policies using Tanzu Mission Control that don't allow any routes without SSO enabled, or we could require circuit breaker configs to exist for every route.

    Summary

    Having SCG for Kubernetes now at our fingertips when using TAP unlocks a wide range of advanced use cases, making the possibilities endless.

    SCG also integrates with App Live View, allowing us to easily visualize our metrics directly in TAP GUI, alongside our applications themselves!

    I am truly excited to see how organizations adopt this technology in their own paths to production, and make everyone’s lives easier, while at the same time, increasing security, application resilience, and visibility!

  • TAP 1.5 – Azure DevOps Is Now Supported

    TAP 1.5 – Azure DevOps Is Now Supported

    Background

    Back in December 2022, while working on designing a TAP implementation for a customer of mine, I came to learn that they use Azure DevOps as their Git server.

    Immediately I decided to take a look at how it would work with TAP, and ran into a bunch of issues.

    Azure DevOps is not a standard Git server, and many limitations and differences are present that make the integration difficult.

    I ended up diving deep into the weeds, and was able to make the integration work, through some custom overlays.

    This solved my issue, but it was not an officially supported configuration, so I shared my findings with people on the TAP team, along with a POC implementation of such an integration. I also wrote a blog post detailing the steps needed to get this working.

    TAP 1.5 – Official Support for Azure DevOps

    Now in TAP 1.5, VMware have built in support for Azure DevOps out of the box, which is amazing!

    Many enterprise customers use Azure DevOps, and integrating with it out of the box, makes the barrier of entry to TAP much lower for many customers.

    What you need to configure to work with Azure DevOps

    As mentioned above, Azure DevOps is not a standard Git implementation and has some hard restrictions as well as some quirks.

    The main 2 differences are:

    1. Azure DevOps requires multi-ack capability from the Git client
    2. Azure DevOps has non-standard paths for repositories compared to other Git providers

    In order to handle these cases, we need to make a few changes in our TAP Values, as well as some changes in our Workload and Deliverable YAML files.

    TAP Values level changes

    We have 2 main integrations with Git that need configuration in our supply chain.

    The first type of integration, is how the platform pulls down source code from git, which is handled by our supply chains.

    The second type of integration, is the GitOps flow, in which we push the generated kubernetes manifests into a git repository either via a direct commit to a specified branch or via opening a pull request.

    Configuring the supply chain to pull from Azure DevOps

    In our TAP values file, we need to set the git_implementation key to libgit2 instead of the default, which is go-git. This is because of the requirement for multi-ack support, which is not provided by go-git.

    This key is a parameter of the supply chain specific settings in your TAP values, so depending on the out of the box supply chain you choose to use, it will look like one of the following:

    ootb_supply_chain_basic:
      git_implementation: libgit2
    
    ootb_supply_chain_testing:
      git_implementation: libgit2
    
    ootb_supply_chain_testing_scanning:
      git_implementation: libgit2
    

    While doing this globally is the better UX and will prevent many issues, this could also be set on a workload-by-workload basis. This may be beneficial if you have multiple teams using the platform, where some use Azure DevOps and others use a different Git provider such as GitHub. This can be easily configured on a workload by adding the parameter "gitImplementation" as below:

    apiVersion: carto.run/v1alpha1
    kind: Workload
    metadata:
      ...
    spec:
      params:
        - name: gitImplementation
          value: libgit2
    

    Configuring the supply chain to push to an Azure DevOps GitOps repository

    When we configure the GitOps flow, we add a gitops section, with the needed values, under the top-level key of the relevant supply chain. The values differ depending on whether you want to use the direct commit approach or the PR approach; in either case, however, the Azure DevOps related changes are the same.

    Under the gitops key, we first have 3 important keys:

    1. server_address – this is the URL to your Git server
    2. repository_owner – this is typically the GitHub/GitLab organization or user
    3. repository_name – the name of the repo we want to push the config to.

    While the hierarchy of most Git Providers is <ORG NAME>/<REPONAME>, in Azure DevOps, we have a middle level object called a project.

    This means that while a typical GitHub URL to a repo would look like:

    https://github.com/vrabbi/tap-gitops
    

    In Azure DevOps it will look a bit different. A project in Azure DevOps does not only contain a Git repo but also many other functionalities, and as such the URLs are built in the following format:

    https://<SERVER FQDN>/<ORG NAME>/<PROJECT NAME>/_git/<REPO NAME>
    

    For example:

    https://dev.azure.com/vrabbi/vrabbi-gitops/_git/vrabbi-gitops
    

    If you notice in the example above, the project name and repo name are the same. This is typically the case; however, you can actually have multiple repositories under a single project.

    Due to this structure of the URL, we need to fill out the values mentioned above in a specific manner:

    1. server_address – this is the same as other providers (eg https://dev.azure.com)
    2. repository_owner – this must be <ORG>/<PROJECT> (eg vrabbi/vrabbi-gitops)
    3. repository_name – this is the same as other providers (eg tap-gitops)

    The final setting we must configure, whether using the PR flow or the direct commit flow, is gitops.pull_request.server_kind, which must be set to the value "azure".

    In the end a sample configuration with the commit flow would look like:

    ootb_supply_chain_basic:
      gitops:
        server_address: https://dev.azure.com
        repository_owner: vrabbi/tap-gitops
        repository_name: tap-gitops
        pull_request:
          server_kind: azure
    

    And a sample with the PR flow would look like:

    ootb_supply_chain_basic:
      gitops:
        server_address: https://dev.azure.com
        repository_owner: vrabbi/tap-gitops
        repository_name: tap-gitops
        commit_strategy: pull_request
        branch: main
        pull_request:
          server_kind: azure
          commit_branch: ""
          pull_request_title: ready for review
          pull_request_body: generated by supply chain
    

    Changes to our workload and deliverable manifests

    The only real change here is to pay attention to the Git URL, and make sure you use the correct Azure DevOps format, which as mentioned above is:

    https://<SERVER FQDN>/<ORG NAME>/<PROJECT NAME>/_git/<REPO NAME>
    

    While this seems trivial, it is important not to add the ".git" suffix to the name of the repo, as this is unsupported by Azure DevOps and will cause your supply chains to fail to pull the source code.
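
    Putting this together, the source section of a workload pointing at an Azure DevOps repo would look something like the following sketch; the org, project, and repo names here are hypothetical:

```yaml
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: my-app
  namespace: my-apps
spec:
  source:
    git:
      # Azure DevOps format: https://<SERVER FQDN>/<ORG>/<PROJECT>/_git/<REPO>
      # Note: no ".git" suffix on the repo name
      url: https://dev.azure.com/vrabbi/vrabbi-gitops/_git/my-app
      ref:
        branch: main
```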

    Summary

    It is great to see that VMware have now added official support for Azure DevOps to TAP. The integration is extremely easy to set up, and the changes mentioned above should be quite familiar to those that deal with Azure DevOps.

    The ability to support so many different customer setups is an amazing part of TAP, due to its extreme flexibility and ease of customization, and having another key Git provider's implementation officially supported is great proof of this.

  • TAP 1.5 – Application Accelerator Updates

    TAP 1.5 – Application Accelerator Updates

    Application Accelerator Overview

    App Accelerator helps you bootstrap developing your applications and deploying them in a discoverable and repeatable way.

    You can think of App Accelerator as a way to publish a software catalog to your end users, which allows them to fill out a simple and customizable form that then generates a templated set of files they can start iterating on for their new project.

    Enterprise Architects author and publish accelerator projects that provide developers and operators in their organization ready-made, enterprise-conformant code and configurations.

    App Accelerator is based on the Backstage Software Templates plugin, and enriches it with additional capabilities and features to make it extremely easy to use.

    What Is New In TAP 1.5

    With every release of TAP, we get some new and exciting features in App Accelerator, and this is very much the case in TAP 1.5!

    There are 3 key new features in App Accelerator:

    1. IntelliJ Plugin for App Accelerator
    2. Accelerator Generated Projects Provenance
    3. Global activation or deactivation of Git repo creation flow

    Let's take a look at each of these features and what they bring to the table.

    IntelliJ Plugin for App Accelerator

    Just like we have had for a few releases in VSCode, we can now generate projects from accelerators directly from IntelliJ!

    The flow is really great, as they have embedded the accelerator plugin in a way that makes it a seamless experience, just like creating a project using any other plugin in IntelliJ.

    Let’s see what the flow looks like:

    First, the developer opens IntelliJ and selects the option to create a new project:

    Next, the developer selects Application Accelerator in the left-hand panel, and selects the relevant accelerator they want to use:

    Next, they fill out the form of that accelerator:

    Next, they review the inputs and generate the project:

    Finally, they press create, and the new project is opened for them automatically in IntelliJ!

    As you can see, the flow is extremely intuitive, and fits very well with the developers' current workflow, with all the additional benefits that Application Accelerators bring to the table included!

    Accelerator Generated Projects Provenance

    Provenance and attestation are the new buzzwords, and for good reason. With supply chain attacks happening day after day, we need a way of stating where something was generated, how it was generated, by whom it was generated, with what configuration settings, and so on.

    Now in TAP 1.5, we get a new feature in Application Accelerator, enabled in all of the OOTB accelerators and easily implementable in all of your own custom accelerators, which provides provenance for a project generated from an accelerator.

    When you generate a project using an accelerator, a new file, accelerator-info.yaml, is generated as part of the generated zip. If you choose to provision a Git repo, this file is pushed along with the other files in the generated zip.

    Let's see an example of a generated file and what it contains:

    id: a572d705-1bb4-49ca-9164-2f8918548d01
    timestamp: 2023-03-28T15:03:04Z
    username: ScottRosenberg
    source: VSCODE
    accelerator:
      name: tanzu-java-web-app
      source:
        image: harbor.vrabbi.cloud/tap/tap-packages@sha256:28e34f0cbf0de2c6f317ea870caf21b79aa52a5359c28666a6ceec90184eb409
    fragments:
      - name: build-wrapper-maven
        source:
          image: harbor.vrabbi.cloud/tap/tap-packages@sha256:195a3ca6585fa91c41292584a19c2807c72ecdf986ce860a7147451e89d467d4
      - name: java-version
        source:
          image: harbor.vrabbi.cloud/tap/tap-packages@sha256:fa976ccf1609cb69e74a0162f0f49581fd0d393003e2fbe5d54d12eae62b4ff9
      - name: tap-workload
        source:
          image: harbor.vrabbi.cloud/tap/tap-packages@sha256:dbf0dedb6848ad8a7704c1c19465a1ddae9039b0e63c1dd0df3e2ed9cbda6093
    options:
      includeBuildToolWrapper: true
      javaVersion: 11
      projectName: tanzu-java-web-app
      repositoryPrefix: harbor.vrabbi.cloud/library/sample
      updateBoot3: false
    

    Here we can see that the project was generated via the VSCode plugin, by Scott Rosenberg, on March 28th, 2023, using the Tanzu Java Web App accelerator, which comes from an imgpkg bundle with the exact SHA noted. It also invoked specific fragments, all of which are also defined as imgpkg bundles, again with exact SHAs mentioned, and the exact inputs I entered are available at the bottom of the file.

    This type of provenance is extremely powerful, and I am really excited to see where this goes in future releases!

    Global activation or deactivation of Git repo creation flow

    One of the great features of Application Accelerator is the ability to automatically create a Git repository when generating a new project.

    While this is a great feature, some organizations do not want to expose this option. In cases where this functionality is not wanted, there is now an easy configuration one can add to their TAP values, under the tap_gui section, which will disable this feature globally. This means it will disable the feature in both the IDE plugins and TAP GUI itself.

    To deactivate this feature, one simply needs to add the following YAML snippet to their TAP values file:

    tap_gui:
      # Existing configuration .......
      app_config:
        # Existing configuration .......
        customize:
          # Existing configuration .......
          features:
            accelerators:
              gitRepoCreation: false
    

    This will disable the feature globally, aligning the platform to your environment's needs in a simple and easy way!

    Summary

    As you can see, VMware are putting a lot of effort into improving the developer experience, while making platform teams happy as well. These updates to Application Accelerator are truly awesome to see, and I am even more excited to see what will become of these features in the future!