Managing Your Open edX Backend With Terraform

Wed, 13 Apr 2022


Scaling an Open edX platform can become unwieldy. Let's take a look at how Terraform can help you maintain control of everything inside your AWS account.

Note: get the source code for this article at https://github.com/lpm0073/cookiecutter-openedx-devops. Follow the instructions in the README.

Open edX is a beast! How do you tame it?

In all fairness, that question is prone to coming up for any successful, modern web platform that goes through a growth spurt. In this article we’ll explore how I manage not just one, but several very large Open edX installations. Here are what I consider to be the key success factors:

  • Infrastructure as code. I use Terraform, but there are other good alternatives. Terraform gives me the ability to version control my backend infrastructure service configurations so that I can safely fall back when I make a mistake, and it gives me complete automation of the entire life cycle of each service, which saves me lots of time.
  • Dedicated VPC. I use a dedicated VPC for each Open edX installation, which helps to optimize the network for each installation as well as to keep systems from bleeding into each other, and, it also helps with tear-downs.
  • Managed Services. All of my Open edX platforms run on AWS, and I’m biased towards using their managed services such as RDS for MySQL, DocumentDB for MongoDB, EKS for Kubernetes, and Elasticache for Redis. This dramatically reduces the number of failure points for which you are directly responsible.
  • Kubernetes. Paradoxically, adding Kubernetes simplifies most aspects of system management.
  • Simple security policies. We’ll talk more below about firewall settings, user accounts, admin accounts, and exposing your backend services to the outside world.

Earlier this year I open-sourced my personal Terraform and Github Actions scripts in the form of a Cookiecutter template repository named Cookiecutter Openedx Devops. You can use this Cookiecutter to create your own Open edX devops repository, perfectly configured with your custom domain name and AWS account information. Cookiecutter Open edX Devops is a highly opinionated set of tools for creating and maintaining an AWS backend for Open edX that satisfies all five of these principles.

The Terraform modules of Cookiecutter Open edX Devops

Cookiecutter Open edX Devops leverages Terraform and Github Actions to provide 1-click backend solutions incorporating the current best practices for each service with regard to feature set, configuration, maintainability and security. This is mostly achieved by restricting the Terraform modules that it leverages to those supported directly by HashiCorp and those published by Terraform AWS Modules, a community of AWS service users spanning dozens of large organizations and thousands of individual contributors. For each backend service it:

  • creates and configures the service
  • stores admin account credentials in Kubernetes Secrets
  • creates security groups, IAM policies and anything else that is necessary for the service to work correctly with the Open edX applications
  • creates Route53 DNS subdomain records
  • reconfigures the Open edX applications to use the new remote service
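As a rough sketch of what the credentials step can look like in Terraform (the resource names, namespace, and key shown here are hypothetical, not necessarily what the Cookiecutter uses):

```hcl
# Generate a strong random password and persist it as a Kubernetes Secret,
# so the Open edX applications can read it from the cluster rather than
# from a file on disk. Names are illustrative.
resource "random_password" "mysql_admin" {
  length  = 16
  special = false
}

resource "kubernetes_secret" "mysql_admin" {
  metadata {
    name      = "mysql-admin" # hypothetical secret name
    namespace = "openedx"
  }

  data = {
    MYSQL_ROOT_PASSWORD = random_password.mysql_admin.result
  }
}
```

Because the password is generated inside Terraform, re-running `terraform apply` with a new random seed is all it takes to rotate it.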

Fully integrated backend

Individual services

  • Kubernetes. Uses AWS Elastic Kubernetes Service to implement a Kubernetes cluster onto which all applications and scheduled jobs are deployed as pods. Tutor natively deploys Open edX applications as individual containers for LMS, CMS, Workers, Forum, etcetera. All backend service admin account credentials are automatically stored in Kubernetes Secrets. The Kubernetes configuration itself is intentionally as simple as possible. Simple is good.
  • MySQL. Uses AWS RDS for all MySQL data, accessible inside the VPC as mysql.yourdomain.edu:3306. Instance size settings are located in the environment configuration file, and other common configuration settings are located here. Passwords are stored in Kubernetes Secrets accessible from the EKS cluster.
  • MongoDB. Uses AWS DocumentDB for all MongoDB data, accessible inside the VPC as mongodb.master.yourdomain.edu:27017 and mongodb.reader.yourdomain.edu. Instance size settings are located in the environment configuration file, and other common configuration settings are located here. Passwords are stored in Kubernetes Secrets accessible from the EKS cluster.
  • Redis. Uses AWS ElastiCache for all Django application caches, accessible inside the VPC as cache.yourdomain.edu. Instance size settings are located in the environment configuration file. This is necessary in order to make the Open edX application layer completely ephemeral. Most importantly, users' login session tokens are persisted in Redis, and so these need to be accessible to all app containers from a single Redis cache. Common configuration settings are located here. Passwords are stored in Kubernetes Secrets accessible from the EKS cluster.
  • Container Registry. Uses this automated Github Actions workflow to build your Tutor Open edX container and then register it in Amazon Elastic Container Registry (Amazon ECR). Uses this automated Github Actions workflow to deploy your container to Amazon Elastic Kubernetes Service (EKS). EKS worker instance size settings are located in the environment configuration file. Note that Tutor provides out-of-the-box support for Kubernetes. Terraform leverages Elastic Kubernetes Service to create a Kubernetes cluster onto which all services are deployed. Common configuration settings are located here.
  • User Data. Uses AWS S3 for storage of user data. This installation makes use of a Tutor plugin to offload object storage from the Ubuntu file system to AWS S3. It creates a public read-only bucket with write access provided to edxapp so that app-generated static content like user profile images, xblock-generated file content, application badges, e-commerce pdf receipts, instructor grades downloads and so on will be saved to this bucket. This is not only a necessary step for making your application layer ephemeral, but it also facilitates the implementation of a CDN (which Terraform implements for you). Terraform additionally implements a completely separate, more secure S3 bucket for archiving your daily data backups of MySQL and MongoDB. Common configuration settings are located here.
  • CDN. Uses AWS Cloudfront as a CDN, publicly accessible as https://cdn.yourdomain.edu. Terraform creates Cloudfront distributions for each of your environments. These are linked to the respective public-facing S3 bucket for each environment, and the requisite SSL/TLS ACM-issued certificate is linked. Terraform also automatically creates all Route53 DNS records of the form cdn.yourdomain.edu. Common configuration settings are located here.
  • Password & Secrets Management. Uses Kubernetes Secrets in the EKS cluster. Open edX software relies on many passwords and keys, collectively referred to in this documentation simply as "secrets". For all backend services, including all Open edX applications, system account and root passwords are randomly and strongly generated during automated deployment and then archived in EKS' secrets repository. This methodology facilitates routine updates to all of your passwords and other secrets, which is good practice these days. Common configuration settings are located here.
  • SSL Certs. Uses AWS Certificate Manager and LetsEncrypt. A Kubernetes service manages all SSL/TLS certificates and renewal requests. It uses a combination of AWS Certificate Manager (ACM) as well as LetsEncrypt. Additionally, the ACM certificates are stored in two locations: your aws-region as well as in us-east-1 (as is required by AWS CloudFront). Common configuration settings are located here.
  • DNS Management. Uses AWS Route53 hosted zones for DNS management. Terraform expects to find your root domain already present in Route53 as a hosted zone. It will automatically create additional hosted zones, one per environment for production, dev, test and so on. It automatically adds NS records to your root domain hosted zone as necessary to link the zones together. Configuration data exists within several modules but the highest-level settings are located here.
  • System Access. Uses AWS Identity and Access Management (IAM) to manage all system users and roles. Terraform will create several user accounts with custom roles, one or more per service.
  • Network Design. Uses Amazon Virtual Private Cloud (Amazon VPC) based on the AWS account number provided in the global configuration file to take a top-down approach to compartmentalizing all cloud resources and customizing the operating environment for your Open edX resources. Terraform will create a new virtual private cloud into which all resources will be provisioned. It creates a sensible arrangement of private and public subnets, network security settings and security groups. See additional VPC documentation here.
  • Proxy Access to Backend Services. Uses an Amazon EC2 t2.micro Ubuntu instance publicly accessible via ssh as bastion.yourdomain.edu:22 using the ssh key specified in the global configuration file. For security as well as performance reasons, all backend services like MySQL, Mongo, Redis and the Kubernetes cluster are deployed into their own private subnets, meaning that none of these are publicly accessible. See additional Bastion documentation here. Terraform creates a t2.micro EC2 instance to which you can connect via ssh. In turn you can connect to services like MySQL via the bastion. Common configuration settings are located here. Note that if you are cost conscious, you could alternatively use AWS Cloud9 to gain access to all backend services.
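To give a flavor of how one of these managed services is declared, here is an abbreviated, illustrative invocation of the community RDS module for the MySQL service. The values shown are placeholders, and several required inputs are omitted for brevity:

```hcl
# Illustrative sketch of provisioning the MySQL service with the
# community module terraform-aws-modules/rds/aws. Values are placeholders.
module "mysql" {
  source  = "terraform-aws-modules/rds/aws"
  version = "~> 4.0"

  identifier        = "openedx-mysql"
  engine            = "mysql"
  engine_version    = "5.7"
  family            = "mysql5.7"     # DB parameter group family
  instance_class    = "db.t3.medium" # set per-environment
  allocated_storage = 10

  db_name  = "openedx"
  username = "admin"
  port     = 3306

  # hypothetical reference to a security group declared elsewhere
  vpc_security_group_ids = [module.security_group.security_group_id]

  # ... subnet group, backup, and monitoring inputs omitted for brevity
}
```

The point is less the specific values than the shape: the module encapsulates dozens of RDS best-practice decisions, and the caller supplies only the handful of inputs that vary per environment.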

Getting started with Terraform

Quick Start

The Cookiecutter uses Terragrunt to call Terraform. You'll need to install the following in your local development environment: Terraform CLI, Terragrunt, and the AWS CLI. When configuring the AWS CLI keep in mind that you'll need to provide an IAM key/secret for an IAM user with permission to create whatever resources are referenced in the Terraform module that you execute. This Terraform Getting Started guide is a good starting point.

What it is and how it works

Terraform is an open-source infrastructure as code software tool created by HashiCorp. Users define and provision data center infrastructure using a declarative configuration language known as HashiCorp Configuration Language (HCL). One of the important benefits of Terraform is the community-supported Terraform modules registered on the Terraform Registry. Cookiecutter Open edX Devops uses carefully vetted modules to implement all of the Open edX backend services like EKS, RDS, etcetera. This helps to ensure that each of these services conforms to current best practices with regard to feature set, maintainability and security considerations.

Terraform is a simple language that you can probably learn in a couple of hours or less. The challenges with Terraform are less about the language itself and more about the minutiae of configuring the backend services that Terraform is managing for you. For example, if you aspire to use Terraform to manage Open edX's MySQL service remotely via RDS, then in addition to the Terraform module that does this you also need a decent understanding of MySQL database administration, AWS itself, RDS, IAM, VPC, how policies work, at least the basics about TCP/IP networks, security groups, and so on. Ditto with regard to every other service that Terraform can manage for you.

The Terraform interpreter compares your code to your AWS account, calculates the difference between one and the other, and then formulates a plan to make your AWS account match your code. It generally works well, but it's not perfect. Terraform can easily become confused, for example, if you ever manually tinker with your AWS resources outside of Terraform. Simply put, there's a learning curve. And the best way to learn is to use Terraform.

In this video I’ll navigate to the Terragrunt template for the VPC, initialize the environment, and create a plan. In this case, the VPC in my AWS account matches that of my source code, so the plan results in no actions.

The occasional Terraform hiccup aside, it’s a valuable tool. It certainly beats trying to build an entire AWS backend by hand.

Running Terraform modules

It's important to remember that Terragrunt calls Terraform. You invoke modules from inside the Terragrunt modules folder, which, after you have run the Cookiecutter, is located in the ./terraform/environments/prod/ folder. Each Terragrunt module invokes a corresponding Terraform module located in ./terraform/modules/. Command forms are pretty simple:

# -------------------------------------
# to manage an individual resource
# -------------------------------------
cd ./terraform/environments/prod/vpc
terragrunt init
terragrunt plan
terragrunt apply
terragrunt destroy

# -------------------------------------
# Or, to build the entire backend
# -------------------------------------
cd ./terraform/environments/prod/
terragrunt run-all apply

Learn Terragrunt and Terraform in 10 minutes

Terragrunt is a templating language that enables you to parameterize Terraform modules in order to make them more re-usable. The most common use case is to manage environments for, say, dev, test, and prod. The Cookiecutter creates one environment named "prod" which you can copy in order to create additional environments if you'd like.

Terragrunt looks really similar to Terraform. In fact, they're almost identical. The recurring coding pattern that you'll see in each Terragrunt module is:

  • locals: variable declarations that are local to the module. They are then addressable as "local.variable_name"
  • dependency: Terragrunt modules on which this module depends. This determines the execution order when using "run-all"
  • terraform: a pointer to the Terraform module being templated
  • inputs: variable declarations that are passed to the Terraform module. These are then addressable within the Terraform module as "var.variable_name"

See these abbreviated snippets from the VPC module, for example.
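A minimal, illustrative terragrunt.hcl following this four-part pattern might look like the following (paths and names are hypothetical, not copied from the Cookiecutter):

```hcl
# Abbreviated, illustrative terragrunt.hcl showing the recurring pattern.
locals {
  environment = "prod" # addressable below as local.environment
}

dependency "vpc" {
  config_path = "../vpc" # determines execution order under "run-all"
}

terraform {
  source = "../../modules//kubernetes" # the Terraform module being templated
}

inputs = {
  environment = local.environment # addressable in Terraform as var.environment
}
```

The same Terraform module can thus be invoked from a dev, test, and prod folder, each supplying different inputs.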

Example Terraform module

In this Cookiecutter, Terraform modules are called exclusively from Terragrunt. You never call a Terraform module directly. The following Terraform module is an abbreviated copy of the module called by the Terragrunt module above. I should point out the following:

  • module: declares a Terraform module which can contain a collection of other Terraform directives to create and relate resources.
  • source: indicates where the source code for this module is located. In this case we're using shorthand that refers to a registered module in the Terraform Registry located at https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest. Complete documentation for this module is located on this same web page.
  • version: optional semantic version constraint notation for the module, assuming that it is located in a repository like Github that understands semantic versioning. Also note that the source code itself is actually located here: https://github.com/terraform-aws-modules/terraform-aws-eks, which I was able to determine by reading the Terraform Registry page for this module.
  • cluster_name: an example of an input for this module. You’ll find a directory of all required and optional inputs on the Registry web page for the module. Also note that this input references a variable that was declared in the calling Terragrunt module. I know this because it is prefixed with “var.”
  • node_security_group_additional_rules: an example of an input in the form of a Terraform dict, which basically follows JSON syntax.
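Pulling these elements together, an abbreviated, illustrative call of the EKS module might look like this (the version constraint and input values are placeholders):

```hcl
# Abbreviated, illustrative invocation of the registered EKS module.
module "eks" {
  source  = "terraform-aws-modules/eks/aws" # Terraform Registry shorthand
  version = "~> 18.0"                       # semantic version constraint

  cluster_name    = var.environment_namespace # input declared by the caller
  cluster_version = "1.21"

  # an input in the form of a Terraform map/object, JSON-like syntax
  node_security_group_additional_rules = {
    ingress_self_all = {
      description = "node to node, all ports and protocols"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      type        = "ingress"
      self        = true
    }
  }
}
```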

Other Terraform language concepts to be aware of

  • provider: AWS, Azure, Google, IBM, Alibaba, and others.
  • locals: an area where you can declare variables that are local to the Terraform module.
  • resource: a provider resource whose lifecycle will be managed by Terraform.
  • data: references to resources that were created outside of the Terraform module.
  • variable: for parameterizing Terraform modules. For every Terragrunt “input” we need to declare a corresponding Terraform variable.
  • output: for exposing module output values. Cookiecutter Open edX Devops does not use outputs.
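As a quick illustrative sketch of a few of these concepts (all names here are hypothetical):

```hcl
# provider: which cloud the module's resources belong to
provider "aws" {
  region = "us-east-1"
}

# variable: pairs with a Terragrunt "input" of the same name
variable "environment_domain" {
  type        = string
  description = "e.g. prod.yourdomain.edu"
}

# locals: variables scoped to this Terraform module only
locals {
  s3_bucket_name = var.environment_domain # derived from the input above
}
```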

Following are examples of a data item and a resource declaration. A data reference addresses an existing AWS item whereas a resource declaration refers to an AWS item that will be created and whose lifecycle will be controlled by Terraform.

Of note regarding the data declaration, data "aws_s3_bucket" "environment_domain":

  • aws_s3_bucket: refers to a type of resource that is understood by the AWS Provider which was declared elsewhere in the code. Documentation for this data declaration is located at: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/s3_bucket
  • a Terraform data declaration gives you a way to retrieve attribute information about a resource, but it does not give you any ability to administer the resource itself. So in this case, for example, we can use the data declaration to retrieve the ARN of the bucket, as data.aws_s3_bucket.environment_domain.arn
  • The AWS S3 bucket will be referred to in this source as "data.aws_s3_bucket.environment_domain", noting that the string value "environment_domain" is user defined.
  • The data declaration is resolved as the AWS S3 bucket whose bucket name equates to the locally-declared Terraform variable local.s3_bucket_name
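An illustrative version of such a data declaration, assuming local.s3_bucket_name is declared elsewhere in the module:

```hcl
# Look up an existing S3 bucket by name; read-only, Terraform does not
# manage this bucket's lifecycle.
data "aws_s3_bucket" "environment_domain" {
  bucket = local.s3_bucket_name
}

# attributes are then addressable with dot notation, e.g.
# data.aws_s3_bucket.environment_domain.arn
```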

Of note regarding the resource declaration, resource "aws_route53_record" "cdn_environment_domain":
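An illustrative sketch of a resource declaration along these lines; the hosted zone data source and CloudFront distribution it references are hypothetical names presumed to be declared elsewhere:

```hcl
# Create (and manage the lifecycle of) a DNS record named
# cdn.<environment domain>, pointed at a CloudFront distribution.
resource "aws_route53_record" "cdn_environment_domain" {
  zone_id = data.aws_route53_zone.environment_domain.id
  name    = "cdn.${var.environment_domain}"
  type    = "CNAME"
  ttl     = 300
  records = [aws_cloudfront_distribution.cdn.domain_name]
}
```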

This next code snippet is a recurring pattern throughout the Terraform modules. These three declarations initialize a Terraform provider referencing the Kubernetes cluster. The certificate value is base64-encoded, which is why you see a call to base64decode(). Also note the usage of dot notation. Complete documentation for each of these three declarations is available in the AWS provider documentation on the Terraform Registry.
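The pattern being described can be sketched as follows; this is the conventional way of wiring the Kubernetes provider to an EKS cluster, with the module name illustrative:

```hcl
# Two data declarations resolve the cluster endpoint and an auth token...
data "aws_eks_cluster" "eks" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "eks" {
  name = module.eks.cluster_id
}

# ...and the provider consumes them via dot notation. The CA certificate
# is stored base64-encoded, hence base64decode().
provider "kubernetes" {
  host                   = data.aws_eks_cluster.eks.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.eks.token
}
```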

Keeping your Terraform code updated

It's important to keep in mind that Terraform code is modular and that the modules themselves have their own semantic versions, which results in a lot of version tracking on your part. This is a big part of what Cookiecutter Open edX Devops does for you, by the way. All told there are upwards of a couple of dozen versions of Terraform modules to track, in addition to Terraform itself and the tools with which Terraform interacts, like the AWS CLI.

Bug fixes and security patches aside, Terraform is a rapidly evolving technology that incidentally sits atop lots of stacks of other technology that is sometimes also rapidly evolving. The Kubernetes project, for example, continues to add new features, many of which are adopted by AWS, who in turn updates their CLI, prompting the maintainers of the community-supported Terraform EKS module to update their module. So essentially, a change of any kind in your backend tends to result in updates to some Terraform module. To mitigate this you should occasionally re-run Cookiecutter Open edX Devops against your repository.

Good luck with the next steps of adding a Kubernetes cluster to your Open edX installation! I hope you found this helpful. Contributors are welcome. My contact information is on my web site. Please help me improve this article by leaving a comment below. Thank you!

Running Open edX At Scale With Kubernetes

Tue, 12 Apr 2022


The rapid adoption of Tutor as a build and deployment solution for Open edX has also opened the door to using Kubernetes. But what is Kubernetes and what problem does it solve for an Open edX platform? Well, plenty! Read on to see what, why, and how you can begin leveraging this next-generation system management technology.

What is Kubernetes, anyway?

The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight letters between the “K” and the “s”. Open-sourced in 2014, the Kubernetes project combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community. But in hard concrete terms, what problems does it solve for an Open edX installation, and why would you want to use it?

"Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services," so says the Kubernetes website. Let's unpack that statement in strict Open edX terms, beginning with the part about "containerized workloads and services".

First of all, what containers? Where did they come from? What do containers have to do with Open edX software? Well, it turns out that Tutor converts the traditional monolithic Open edX platform into a collection of containers, and so if you're installing with Tutor then that part has already been taken care of. Not only that, Tutor provides native support for Kubernetes, so you really don't have to do much of anything other than decide that you want to deploy to a Kubernetes cluster.

Thanks Régis!

Let’s defer definitions of “portable and extensible” for the moment, and instead I’ll add my own thoughts. Kubernetes orchestrates the collection of containers that Tutor creates at deployment. That is, if you create a Kubernetes cluster consisting of more than one virtual Linux server instance then Kubernetes takes responsibility for deciding on which server to place each container.

But, how many containers does Tutor create, and even if the answer is “a bunch” is it really necessary to introduce industrial grade systems like Kubernetes just to spread out the workloads across the servers? I mean, why can’t you just break up the application stack yourself, putting the LMS on one server, the CMS on another, MySQL on another, and so on? Well, you can. And I have. And I’ve written copious articles on how to do exactly that. Kubernetes is simply a more robust way for you to automatically and more efficiently allocate your workloads (LMS, CMS, Forum, etc) across your server instances.

Let's look at a practical example. The screen shot below is a Kubernetes cluster managing multiple Open edX environments for dev, test, staging, and prod; all for the same client. Each environment is grouped into "namespaces", and you can see that each namespace contains individual pods (a pod is a collection of one or more containers) for lms, cms, worker threads, forum, miscellaneous jobs, and a handful of system-related pods. To be sure, all of these pods were created by Tutor, in mutually exclusive deployments. I safely deployed each of these environments into the same cluster using Tutor, and with no risk whatsoever of these environments accidentally bleeding into one another. That simply does not happen with Kubernetes.

You can see from a cursory review of the "AGE" column that some of these containers have been running for upwards of a year. In fact, this screen shot presents fewer than half of the pods running on this cluster, which in reality is running more than 50 pods right now. The "NODE" column indicates on which AWS EC2 instance each container is running, which in point of fact is completely irrelevant to me because Kubernetes takes care of all of this with absolutely no human intervention whatsoever. If a server node were to fail, which does happen on occasion, Kubernetes would simply spin up a replacement EC2 instance and redeploy the containers.

You could obviously do this without Kubernetes, and I have. However, you’d need at least twice the number of EC2 instances, and your workloads would not be as well-balanced nor as stable, and you’d need to take care of Ubuntu updates and up-time yourself.

So, summarizing with the help of the illustration below. The “Traditional Deployment” would represent an on-premise installation of Open edX to a single physical server, the “Virtualized Deployment” would represent a partially horizontally-scaled installation of the same platform but running on virtual server instances instead of bare hardware, and the “Container Deployment” would represent our Kubernetes cluster above.

But wait, there’s more!

Stability

Kubernetes actually provides you with multiple ways to improve your up-time. Like I mentioned above, Kubernetes will intervene in the event of an all-out EC2 instance failure in which a computing node becomes completely unresponsive. But it also does the same thing if an individual pod becomes unresponsive. In fact, Kubernetes is designed to manage multiple replicas of the same service; as many as you think you might need. For example, you could deploy the LMS service as a collection of six replicas, and Kubernetes will deploy and distribute the six resulting pods across as many EC2 instances as it can. If one of these individual pods fails then Kubernetes will automatically replace it. Kubernetes can even scale these pods automatically based on real-time workload.

Scalability

Kubernetes also has a platform-agnostic way to scale the computing nodes that make up the cluster. In the case of AWS these are EC2 instances, but Kubernetes itself does not contain any AWS-specific product awareness. Instead, Kubernetes implements a concept called "Ingress" that goes well beyond the scope of this article and that you should read more about. Technically speaking, an ingress' sole role is to expose your cluster to the outside world, like say, to open port 80 to the public so that your cluster can receive web requests. That's why it's called an ingress. Get it? Incidentally however, in the case of AWS at least, this is exactly where EC2 load balancers come into play, and thus the two topics kind of merge together.

Vastly summarizing, it is common for a Kubernetes cluster running on AWS EKS to also have a companion AWS Load Balancer that manages the number of EC2 nodes based on real-time workload. This is a powerful combination that gives your Open edX installation immense scale.

Cost Efficiency

In addition to my comments in the previous section about how Kubernetes orchestrates workloads, there are a couple of other ways that you can bring even more efficiency to a Kubernetes cluster. First, you can be incredibly specific about what kinds of EC2 instances are running in the cluster, and when, and why. For example, you might launch your cluster with a pair of modest-sized t3.large compute nodes but scale the cluster with t3.2xlarge instances. That would be a sensible configuration if your Open edX platform is idle for extended periods of time but prone to sporadic bursts of student activity. Second, you can create multiple node groups inside of a cluster made up of whatever combinations of EC2 instance types you like, and you can use selectors to direct different kinds of pods to different node groups. This would enable you, for example, to run LMS pods on one kind of EC2 instance while LMS worker threads run on a different kind. And finally, in the case of AWS at least, they offer a "serverless" compute option called Fargate that is anecdotally kind of the same thing as Lambda, in which you can deploy pods onto serverless compute nodes that are managed by the Kubernetes cluster.
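As an illustrative sketch (not the Cookiecutter's actual configuration), the multiple-node-group idea can be expressed through the community EKS module roughly like this; group names and sizes are hypothetical:

```hcl
# Illustrative: a small always-on node group for baseline load, plus a
# larger node group that scales from zero for sporadic bursts of activity.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"

  cluster_name = "openedx-prod" # hypothetical

  eks_managed_node_groups = {
    base = {
      instance_types = ["t3.large"]
      min_size       = 2
      max_size       = 2
      desired_size   = 2
    }
    burst = {
      instance_types = ["t3.2xlarge"]
      min_size       = 0
      max_size       = 10
      desired_size   = 0
    }
  }
}
```

Pods can then be steered to one group or the other with node selectors or taints/tolerations.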

How to create a Kubernetes cluster

You have two choices that I know of, and they’re both anticlimactic. Honestly, there’s not much to it. Tutor and AWS combined have created what very well might be the easiest new-fangled IT gadget on-ramp you’re ever going to encounter.

From the AWS Console web app

The easy, 1-click way to create a cluster is to use the AWS console. Navigate to Elastic Kubernetes Service (Amazon EKS), click the "Add cluster" button, give it a name, and voilà, a new cluster appears about 5 minutes later.

With Terraform

If you’re more ambitious then you should check out the Terraform Kubernetes EKS module in Cookiecutter Openedx Devops. This is way out of scope for this article, but I promise to write more about it soon.

Note

Cookiecutter OpenedX Devops is a completely free open source tool that helps you to create and maintain a robust, secure environment for your Open edX installation. The Github Actions workflow we review below is only one of the many fantastic devops tools that are provided completely free by Cookiecutter OpenedX Devops. If you want, you can follow the README instructions in the Cookiecutter to create your own repository, pre-configured with your own AWS account information, your Open edX platform domain name and so on. It's pretty easy.

Kubernetes cluster administration

Generally speaking, Kubernetes is a mostly hands-off service. But having said that, there are certain things that you’ll need to do as an administrator, primarily during the setup process. Most real devops professionals with whom I’ve worked use kubectl and/or k9s. But I’ll show you all three ways here, in the name of completeness.

With kubectl

There's an API CLI deeply rooted at the heart of Kubernetes, deftly named kubectl, that you'll want to install on your local dev environment. Find your platform on the official download site, follow the instructions, and have at it! To save you some time, a couple of words of guidance on getting kubectl set up. There are three data items located in the AWS EKS console page, near the bottom of the Configuration -> Details section of your new Kubernetes cluster, which you're going to need in order to get kubectl running on your dev environment. Additionally, just an FYI that you are the only person who will be able to see or interact with the cluster until you explicitly grant access to other users; not even other AWS account admins nor the root account will be able to see the cluster until you (the cluster creator) grant access.

Once you get kubectl running on your dev environment you'll be able to do anything of which a Kubernetes cluster is capable, from the command line. Granted, it's pretty crude, but this is how devops pros prefer things. I guess.

With k9s

k9s is the Cadillac of Kubernetes cluster administration tools. Its hipster retro UI is elegantly crafted from only the finest 8-bit ASCII characters, and its on-screen help provides you with only what you really, really, really, REALLY …… really must know, and not a single thing more. Aesthetics aside, it's an amazing tool that basically puts a lightweight GUI on top of kubectl. It has broad community support and it works exceptionally well. I've found it to be indispensable while I continue up the learning curve of using kubectl. To whet your appetite, from the screen below, I'm one keypress away from viewing the MySQL root password, which is stored in Kubernetes Secrets and easily referenced from k9s.

It is absolutely true to say that anything you can do in k9s can also be done with kubectl. On the other hand it might also be true that anything you can do with kubectl you can also do with k9s, but I don’t know that for a fact. By way of example, here’s another common k9s screen that displays all of the pods for the namespace “openedx” and in the foreground is a terminal window presenting the same data by calling kubectl from the command line: kubectl get pods -n openedx.

In short, the kubectl command line is pretty simple and easy to learn, and very convenient once you get up the learning curve. But k9s’ ASCII art!!!! Just say’n.

From the AWS Console web app

The AWS console for EKS is nearly useless. That’s why devops folks prefer kubectl and k9s. I cannot think of anything useful that it does beyond providing a way for you to copy the API service endpoint, Certificate authority, and Cluster ARN which you need for configuring kubectl. But even that is debatable. You can use the AWS CLI to do the same thing, like this: aws eks --region us-east-1 update-kubeconfig --name prod-stepwisemath-mexico, which saves you from having to log in to the console, find EKS and drill down 4 more screens. Plus, the aws cli actually stores the three parameters inside the kubectl config file for you, automagically.
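
In shell form the kubeconfig setup looks like this. The region and cluster name come from the example command in the text; the sketch echoes the command rather than executing it, so it is self-contained (the real call needs the aws cli and IAM permissions on the cluster).

```shell
REGION="us-east-1"
CLUSTER="prod-stepwisemath-mexico"   # example cluster name from the text

# The real call writes the API endpoint, certificate authority and cluster
# ARN into ~/.kube/config so that kubectl can find the cluster:
#   aws eks --region "$REGION" update-kubeconfig --name "$CLUSTER"
echo "aws eks --region $REGION update-kubeconfig --name $CLUSTER"
```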

Kubernetes api tips & tricks

I’ve been working with Kubernetes for around a year, and truth be told I mostly just need to keep the clusters healthy and running. I’m mostly not doing anything daring or ambitious. That said, I lean heavily on k9s and kubectl to do everything. Following is a k9s cheat sheet, understanding that for anything I show you here, there’s a command-line equivalent that you can look up in the kubectl online help site.

  • menu: type “:” (minus the quotes) to raise the command menu. From there, type any of the following commands
  • namespaces: presents a complete list of the namespaces that exist in the cluster. Choosing one of these will cause any other screen in k9s to automatically filter its contents to include only the namespace that you selected
  • contexts: a context is a fancy word for a cluster. You can administer more than one cluster with k9s. I manage five. Choose one, and k9s automatically filters all other screens accordingly.
  • secrets. Kubernetes can store secrets. It’s very useful. You should read about it. Cookiecutter Open edX Devops tools makes extensive use of this capability. Highlight any secret and press “x” to decode and view.
  • pods. You get a list of every pod contained in the context / namespace that you selected. Surprised? Perhaps more interestingly, it’s from this screen that you can tail logs and shell into a terminal window for each pod. Ok, how about now? Surprised!!??!!!
  • services. You get a list of every service for the context / namespace you selected. More interestingly, you can edit the service manifests directly from this screen.
  • ingress. Ditto. You get a list of ingresses which you can directly manage from this window.
  • jobs. Ditto.
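
Each of those k9s screens maps onto a kubectl equivalent. A rough mapping, with an illustrative namespace:

```shell
NAMESPACE="openedx"
# k9s screen       kubectl equivalent
# :namespaces  ->  kubectl get namespaces
# :contexts    ->  kubectl config get-contexts
# :secrets     ->  kubectl get secrets -n "$NAMESPACE"
# :pods        ->  kubectl get pods -n "$NAMESPACE"
# :services    ->  kubectl get services -n "$NAMESPACE"
# :ingress     ->  kubectl get ingress -n "$NAMESPACE"
# :jobs        ->  kubectl get jobs -n "$NAMESPACE"
echo "kubectl get secrets -n $NAMESPACE"
```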

Here’s a link to the official command reference for kubectl (and k9s): https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands. It does a lot. Lots more than what I’m showing you here.

Good luck on next steps with adding a Kubernetes cluster to your Open edX installation!! I hope you found this helpful. Contributors are welcome. My contact information is on my web site. Please help me improve this article by leaving a comment below. Thank you!

The post Running Open edX At Scale With Kubernetes appeared first on Blog.

Continuous Integration (CI) With Tutor Open edX – Part I

Mon, 11 Apr 2022

Tutor, Github Actions and AWS Elastic Container Registry are a powerful trio of tools for creating an automated CI process to build and register your custom Open edX Docker image, and automating the entire process is easy.

This is part I of a two-part series on implementing CI/CD processes with Tutor Open edx. In this first part we’ll automate a Tutor Open edX build. In part II we’ll learn how to deploy this build.

In this article I’ll explain the key pieces of this fully-functional Github Actions Build workflow, which does the following:

  1. builds a custom Open edX Docker image using a custom fork, a custom theme, an Open edX plugin, and one Xblock,
  2. registers the custom Docker image in AWS Elastic Container Registry.

With only minor modifications you can tailor this workflow to automate the build of your own Open edX installation.

Note

The code repository referenced in this article was generated with Cookiecutter OpenedX Devops, a completely free open source tool that helps you to create and maintain a robust, secure environment for your Open edX installation. The Github Actions workflow we review below is only one of the many fantastic devops tools that are provided completely free by Cookiecutter OpenedX Devops. If you want, you can follow the README instructions in the Cookiecutter to create your own repository, pre-configured with your own AWS account information, your Open edX platform domain name and so on. It’s pretty easy.

Using CI/CD Tools with Tutor Open edX

Tutor was formally introduced to the Open edX community at the 2019 annual Open edX conference in San Diego, California. It is a Docker-based build, configuration and deployment tool that greatly simplifies the complexity and the knowledge base that is required on your part to manage an Open edX installation. Tutor became the official Open edX installation tool beginning with the Maple release in fall of 2021. In this article we’ll focus on Tutor’s build function, which provides a 1-click way of creating a custom Docker image of your Open edX installation. Tutor’s build function assembles all of the source code repositories and support libraries for your Open edX installation into a single Docker image which can then be deployed pretty much anywhere you like. It does the following:

  • download github.com/openedx/edx-platform, or alternatively, a fork of this repository
  • download all dependent source code repositories, noting that the code within edx-platform references many other repos
  • download run times for Python, Django, NodeJS, React and all of the many other systems libraries. There are many
  • download all Python PyPi requirements. There are many
  • download any custom XBlocks that you want to add to your Open edX installation
  • download your Open edX plugin, if you have one
  • download your custom theme, if you have one
  • compile static assets
  • move all of these components into a Docker containerized format
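
The steps above boil down to only a few Tutor commands. In this sketch the AWS account id, region and repository name are placeholders (substitute your own), and the tutor commands are commented because they require Docker and Tutor installed locally; `DOCKER_IMAGE_OPENEDX` is the Tutor setting that points the build at your registry.

```shell
# Placeholder AWS account id / region / repository -- substitute your own.
AWS_ACCOUNT_ID="012345678901"
AWS_REGION="us-east-1"
REPOSITORY="openedx"
DOCKER_IMAGE="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPOSITORY:latest"

# The build itself (requires Docker plus Tutor installed locally):
#   tutor config save --set DOCKER_IMAGE_OPENEDX="$DOCKER_IMAGE"
#   tutor images build openedx
#   tutor images push openedx
echo "$DOCKER_IMAGE"
```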

That’s a lot of steps, and to be sure, this is the vast majority of what happens under the hood during a traditional native installation process for versions of Open edX prior to Maple. It would seem miraculous that this doesn’t result in complete anarchy in light of the fact that the combined code base inside the Docker image is maintained in real time by dozens of different developer teams from different organizations who mostly don’t directly communicate with each other. But the reality is that you can repeat the build process, achieving the exact same result each time, because all of these repositories use semantic versioning, and Open edX and Tutor pin all of the versions on which they each depend.

The benefit of building a Docker image as opposed to installing the same software directly onto an Ubuntu instance is that you build the container once and then store it in a container registry — AWS ECR in our case but there are many alternatives — and then afterwards you can deploy it anywhere pretty easily using simple Tutor commands. I’m not going into any detail about the Tutor build itself because it’s a comparatively simple operation that is already very well documented. By contrast, this article focuses on how to combine the Tutor build procedure with other open source tools to implement robust continuous integration and continuous delivery (CI/CD) processes for your Open edX installation. I’ve been using this methodology on my larger Open edX sites for about a year now and it works great. In this article we leverage Github Actions to fully automate a Docker build, but you could use any other CI platform.

Now then, before we dive into the build workflow, I want to digress for a moment on why incorporating CI is beneficial. GitHub Actions is a popular and mostly-free CI/CD platform that allows you to automate your build, test, and deployment pipeline. Docker itself is highly conducive to the general principle of CI/CD. Github Actions can be triggered to run automatically upon, for example, any pull request to any repository that is part of your Open edX image. I became a fan of Github Actions about 18 months ago while working as part of a team on a large installation. It speeds up and simplifies the development pipeline for all of the team members by automating tasks such as kicking off unit tests each time code is pushed to a repository. It’s coded in yaml format and is very easy to learn and to read. It’s stored inside of your repository, right alongside your code and configuration data. It provides consistency in the build and deployment pipelines, especially when there are many steps to your build, like in the example we’re going to review below. It provides granular role-based permissions to your team and your systems user accounts allowing you to harden security around your deployment work flows. It provides a great set of tools for managing passwords and other sensitive data. And finally, it generates logs of each of your deployments which is enormously helpful when you need to troubleshoot something. So, in a few words, it’s valuable technology that you should consider adding to your repertoire.

Github Actions Workflow

The example Github Action Build workflow uses Tutor to build a custom Open edX Docker image and then upload it to AWS Elastic Container Registry (ECR).

We’re going to use this Github Actions workflow to automate the following operations

  • set up our workflow environment: create a virtual instance of Ubuntu and then install Tutor and the aws-cli
  • authenticate to the aws cli using a special AWS IAM user account named ci. The key and secret are stored in Github Secrets in the same repository. We’ll cover this in more detail below
  • leverage a prebuilt Github Action named actions/checkout@v2 for downloading all of the code repositories
  • leverage a prebuilt Github Action named docker/setup-buildx-action@v1 to manage the Docker build
  • leverage a prebuilt Github Action named aws-actions/amazon-ecr-login@v1 to manage our interactions with AWS ECR
  • configure Tutor
  • build a Docker image
  • push it to AWS ECR

I should point out that there are many prebuilt Github Actions and in general the big vendors like AWS, Azure, Google and Digital Ocean all provide high quality prebuilt Github Actions to facilitate integrations into their respective platforms. This Github organization alone maintains nearly 50 production-ready actions that do anything from setting up Python and virtual environments for you to speeding up your workflow with caching. Our example build workflow is pretty well documented, so I’m going to spend the remainder of this article explaining a few of the recurring patterns that you’ll encounter in this code.

Layout of a Github Actions workflow

The entire workflow is written in yaml using a limited set of commands that you can easily learn from this Getting Started guide. The workflow runs on a Github-hosted virtual server instance. Github gives you 2,000 minutes of server time for free each month which should be more than you need in most cases. The example build workflow in this article consumes around 35 minutes each time it runs, and I usually run the workflow only a couple of times a month at most. The server instances are ephemeral and are destroyed immediately upon completion of the build workflow. You therefore have to create your entire build environment each time the workflow runs.

Per the screen shot below, this workflow runs on “workflow_dispatch” (row 16) which is a lofty way of saying that it runs when you click the “Run” button from the Github Actions console page of your repository on the github.com site. We define our workflow environment (on row 20) as an Ubuntu 20.04 server on which we’ll need to install Tutor and AWS CLI at the beginning of the workflow. In this section we also define a few environment-wide variables that are referenced throughout the remaining code, namely, the unique identifier for our container in AWS ECR.
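
Since the screen shot isn’t reproduced here, a minimal sketch of what that header section looks like. The names and values are illustrative rather than copied from the actual repository:

```yaml
name: Build Open edX Docker image

on: [workflow_dispatch]             # run manually from the Actions console

env:
  AWS_ECR_REPOSITORY: openedx       # illustrative: the container's unique identifier in ECR

jobs:
  build:
    runs-on: ubuntu-20.04           # the ephemeral Ubuntu build server
```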

More about steps

I mostly learned about steps by looking at sample code. Fortunately, this example workflow contains a broad mixture of most of the kinds of things that you’ll want to do with your own workflow, and so it should be a pretty good starting point. The screen shot below demonstrates a couple of use cases of steps created with prebuilt actions along with one example of a multi line command format.

Running tutor from inside a Github Actions workflow

Interacting with Tutor inside of a Github Actions workflow is pretty straightforward once you’ve seen a few examples. It obviously requires that you first understand the exact steps that you’d execute from the command-line, which you can learn more about here: https://docs.tutor.overhang.io/. From there, you mostly just need to see working examples of the syntax for how to code different use cases.

For example, these two steps in the screen shot below build the image and then push it to AWS ECR. At a glance, it’s pretty intuitive. The build process takes around 35 minutes by the way, which is another reason why it’s a great idea to use Docker so as to minimize the occasions when you have to endure this long-running process.

Managing passwords and other sensitive data

Github Actions provides an excellent way for you to integrate passwords and other sensitive data into your workflow without risk of it leaking into the public domain. See “Settings” -> “Secrets” -> “Actions” in your repository for the console screen where you can define and store all sensitive data that you need to integrate into your workflow. In the example workflow we use a single key pair for the AWS CLI. There’s a 2nd key pair defined in the secrets section of the repository, AWS_SES_IAM_KEY / SECRET that is used elsewhere in the repo, but not this particular workflow. Lastly, we define a Github Personal Access Token (PAT) that determines the workflow’s permissions within Github Actions itself during execution. See the screen shot above for example syntax on how to reference this data from within the Github Action workflow: “${{ secrets.THE-NAME-OF-YOUR-SECRET }}”

Alert

Keeping your AWS credentials away from prying eyes is serious business. Take a look at this article for a firsthand accounting of the horrors that await you if your ci credentials ever leak into the public domain by, for example, you accidentally pushing these to Github, “Enemy at the Gates“.

Running the Github Actions workflow

Once you add a workflow to your repository its Github Actions console page will magically reformat itself into something similar to the screen shot below, noting of course that the “Run Workflow” button appears because we explicitly included the command, “on: [workflow_dispatch]” on row 18.

Each run of the workflow contains a detailed timestamped log of the console output from the Ubuntu instance. It is neatly organized by the text name of each step in the workflow, on which you can click to drill-down to see the detailed output.

Good luck with automating your Open edX build!! I hope you found this helpful. Contributors are welcome. My contact information is on my web site. Please help me improve this article by leaving a comment below. Thank you!

The post Continuous Integration (CI) With Tutor Open edX – Part I appeared first on Blog.

Continuous Integration (CI) With Tutor Open edX – Part II

Sun, 10 Apr 2022

Tutor provides a powerful and easy to use set of tools for advanced configuration of your Open edX installation. Lets take a closer look at how you can automate your entire Open edX deployment process using Github Actions workflows.

This is part II of a two-part series on implementing CI/CD processes with Tutor Open edX. In part I of this series we learned how to automate the build process. In this article we’ll learn how to deploy the image that we built in part I. First we’ll look at a working, fully-automated Open edX deployment script using Github Actions, and then we’ll discuss how you can customize this workflow to suit your needs. We’ll be using this fully-functional Github Actions Deployment workflow that comes from the same repository as the workflow from part I.

Note

The code repository referenced in this article was generated with Cookiecutter OpenedX Devops, a completely free open source tool that helps you to create and maintain a robust, secure environment for your Open edX installation. The Github Actions workflow we review below is only one of the many fantastic devops tools that are provided completely free by Cookiecutter OpenedX Devops. If you want, you can follow the README instructions in the Cookiecutter to create your own repository, pre-configured with your own AWS account information, your Open edX platform domain name and so on. It’s pretty easy.

Some Background About Tutor and Github Actions

Tutor provides two distinct means of modifying the default configuration of your Open edX instance. First, it gives you a way to modify any of the hundreds of Open edX application parameters found in the edx-platform environment configuration files such as edx-platform/lms/envs/common.py and production.py. Just follow these well-written instructions on how to use the Tutor command line to configure your Open edX platform. Additionally, it gives you a way to create your own custom Docker image containing, for example, additional XBlocks, a custom theme, an Open edX plugin, or even your own fork of the edx-platform source code repository. Creating a custom Docker image is easier than it might seem, and the procedure is well-documented here. In this article we’ll look at some common use cases of both of these for customizing your Open edX platform configuration, and, we’ll leverage Github Actions to fully automate and properly document our steps.

Importantly, you’ll also see in this example that Tutor is a sophisticated deployment tool that provides out-of-the-box support for Docker and for Kubernetes, which is an amazing and under-hyped capability. For context, Tutor does not simply deploy a single Open edX container. In fact, it splits the Open edX application suite into separate pods for lms, cms, one or more workers for each application, e-commerce, Discovery service, and so on. And, you can then individually administer and optimize the behavior and performance of these pods in your production cloud infrastructure environment. Furthermore, Tutor configures and deploys all of the back end services such as MySQL, MongoDB, Redis, and Nginx (or Caddy). All told it potentially deploys upwards of a dozen different kinds of containers, either into Docker installed on an Ubuntu instance or into a Kubernetes cluster.
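
As a sketch of what that deployment surface looks like from the command line (this assumes Tutor is installed and kubectl is already pointed at your cluster; the tutor commands are commented because they need a live cluster):

```shell
# Tutor deploys into the namespace configured by its K8S_NAMESPACE setting;
# "openedx" is Tutor's default.
NAMESPACE="openedx"

# Requires Tutor plus a reachable Kubernetes cluster:
#   tutor k8s start   # apply manifests for lms, cms, workers, mysql, mongodb, ...
#   tutor k8s init    # run one-off init jobs: migrations, service users, indexes
echo "target namespace: $NAMESPACE"
```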

Now then, before we dive into our deployment, I want to digress for a moment on why we’re using Github Actions for this exercise. GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. Github Actions can be triggered to run automatically upon, for example, every pull request to your repository. I became a fan of Github Actions about 18 months ago while working as part of a team on a large installation. It speeds up and simplifies the development pipeline for all of the team members by automating tasks such as kicking off unit tests each time code is pushed to a repository. It’s coded in yaml format and is very easy to learn and to read. It’s stored inside of your repository, right alongside your code and configuration data. It provides consistency in the build and deployment pipelines, especially when there are many steps to your build, like in the example we’re going to review below. It provides granular role-based permissions to your team and your systems user accounts allowing you to harden security around your deployment work flows. It provides a great set of tools for managing passwords and other sensitive data. And finally, it generates logs of each of your deployments which is enormously helpful when you need to troubleshoot something. So, in a few words, it’s valuable technology that you should consider adding to your repertoire.

Github Actions Workflow

The example Github Action workflow uses Tutor to deploy a custom Open edX Docker image that was previously uploaded to AWS Elastic Container Registry (ECR). Additionally, our backend has been horizontally scaled and leverages several AWS managed services like AWS Relational Database Service (RDS), AWS Document DB for MongoDB, AWS Elasticache for Redis, and AWS Simple Email Service (SES) for SMTP, plus a Tutor plugin named hastexo/tutor-contrib-s3 that offloads file management from the Ubuntu file system to a secure AWS S3 bucket. Lastly, our example Github Action workflow deploys into AWS Elastic Kubernetes Service (EKS) which is also where all of our back end credentials are stored. Incidentally, for this example, all of these backend services were created using fully-automated Terraform modules that are included in Cookiecutter OpenedX Devops.

Keep in mind that all of these backend services are already up and running. During deployment we simply need to configure our Open edX applications to connect to these already-existing remote services rather than to the default “locally” hosted services. Also for the avoidance of any doubt, the practical theory surrounding how to scale Open edX’s backend services is substantially the same regardless of whether you’re running on Docker or using a native build.

We’re going to use this Github Actions workflow to automate the following operations

  • set up our workflow environment: create a virtual instance of Ubuntu and then install Tutor, aws-cli, and kubectl
  • authenticate to the aws cli using a special AWS IAM user account named ci
  • authenticate to kubectl using credentials that we’ll retrieve with the aws cli
  • retrieve connection parameters and account credentials from Kubernetes Secrets for all of the remote backend services, and then format these into valid Tutor Open edX parameters
  • format and merge all of our custom lms.env.json and cms.env.json parameter values
  • configure hastexo/tutor-contrib-s3
  • set our Open edX custom theme
  • deploy our Open edX installation to a Kubernetes cluster

The workflow is pretty well documented, so I’m going to spend the remainder of this article explaining a few of the recurring patterns that you’ll encounter in this code.

Layout of a Github Actions workflow

The entire workflow is written in yaml using a limited set of commands that you can easily learn from this Getting Started guide. The workflow runs on a Github-hosted virtual server instance. Github gives you 2,000 minutes of server time for free each month which should be more than you need in most cases. The example workflow in this article consumes between 4 and 9 minutes each time it runs, and I usually run the workflow a few dozen times a month at most. The server instances are ephemeral and are destroyed immediately upon completion of the workflow. You therefore have to build your entire deployment environment each time the workflow runs.

Per the screen shot below, this workflow runs on “workflow_dispatch” (row 18) which is a lofty way of saying that it runs when you click the “Run” button from the Github Actions console page of your repository on the github.com site. We define our workflow environment (on row 22) as an Ubuntu 20.04 server on which we’ll need to install Tutor, AWS CLI and kubectl at the beginning of the workflow. In this section we also define a few environment-wide variables that are referenced throughout the remaining code.

More about steps

I mostly learned about steps by looking at sample code. Fortunately, this example workflow contains a broad mixture of most of the kinds of things that you’ll want to do with your own workflow, and so it should be a pretty good starting point. The screen shot below demonstrates single line and multi line command formats. It also shows an example of how to incorporate community-supported code blocks. In this case, we’re installing kubectl, the Kubernetes command-line interface (cli) with a code block that is written and supported by Microsoft Corp’s Azure team.

tutor config from inside a Github Actions workflow

Here’s an example of the syntax for setting multiple Open edX configuration parameters using tutor config. Also, you should note the syntax in this screen shot for referencing a Github Secret; in this case, an AWS IAM key and secret. And also note how the environment variable “TUTOR_RUN_SMTP” is declared and then redirected to $GITHUB_ENV, a variable that is declared by Github Actions itself and that in our case contains all of the Ubuntu environment variables that are declared over the life of this workflow.
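
The $GITHUB_ENV pattern itself is easy to demonstrate outside of Github Actions: the platform hands each workflow a file path in $GITHUB_ENV, and any KEY=value lines appended to that file become environment variables for subsequent steps. A self-contained imitation, using a temp file in place of the real path:

```shell
# Outside of Github Actions, imitate $GITHUB_ENV with a temp file.
GITHUB_ENV=$(mktemp)

# Same pattern as the workflow step: declare, then redirect.
TUTOR_RUN_SMTP=false
echo "TUTOR_RUN_SMTP=$TUTOR_RUN_SMTP" >> "$GITHUB_ENV"

cat "$GITHUB_ENV"   # Github Actions reads this file back before the next step
# -> TUTOR_RUN_SMTP=false
```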

Managing passwords and other sensitive data

Github Actions provides an excellent way for you to integrate passwords and other sensitive data into your workflow without risk of it leaking into the public domain. See “Settings” -> “Secrets” -> “Actions” in your repository for the console screen where you can define and store all sensitive data that you need to integrate into your workflow. In the example workflow we define two key pairs, one for the AWS CLI and another for the AWS SES SMTP email service. We additionally define a Github Personal Access Token (PAT) that determines the workflow’s permissions within Github Actions itself during execution. See the screen shot above for example syntax on how to reference this data from within the Github Action workflow: “${{ secrets.THE-NAME-OF-YOUR-SECRET }}”

For the example Github Actions workflow, most of the password data is stored in Kubernetes Secrets, thus the only secrets that we need to manage in Github Secrets are the AWS IAM key pairs for the aws cli and for connecting to the AWS SES SMTP email service.

Running the Github Actions workflow

Once you add a workflow to your repository its Github Actions console page will magically reformat itself into something similar to the screen shot below, noting of course that the “Run Workflow” button appears because we explicitly included the command, “on: [workflow_dispatch]” on row 18.

Each run of the workflow contains a detailed timestamped log of the console output from the Ubuntu instance. It is neatly organized by the text name of each step in the workflow, on which you can click to drill-down to see the detailed output.

As an example, drilling into the “Deploy Tutor” command on row 281 reveals its full console output.

Tutor Open edX Configuration Data

The Open edX configuration data is stored in a few locations depending on its size, its sensitivity, and how Tutor consumes it. Password data for Kubernetes is explained in detail below, in the next section. You can also see from a cursory review of the deployment workflow itself that a considerable number of Tutor parameters are defined and set within this single piece of code. The remaining configuration data — the vast majority of the configuration data that is — is stored in the repository in ci/tutor-deploy/environments/prod/. In particular, the file settings_merge.json contains most of the Open edX application settings variables that you should at least be aware of. And remember, all of the files in this folder were created automatically by Cookiecutter OpenedX Devops.

Note that the entire contents of the two Tutor configuration files lms.env.json and cms.env.json are dumped to the console multiple times during the workflow using the built-in Linux command “cat”. See rows 163, 240, 254, 265 and 274.

Kubernetes

The example Github Actions workflow deploys into a Kubernetes cluster which while not required, certainly provides a lot of system management benefits. If you are considering deploying to a Kubernetes cluster then you might find the following additional explanations helpful.

kubectl

The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. The example Github Actions workflow uses this tool extensively, primarily for extracting password data from Kubernetes Secrets. For more information including a complete list of kubectl operations, see the kubectl reference documentation.

To connect to a Kubernetes cluster, kubectl depends on a configuration file named kubeconfig which in the example workflow is being retrieved from AWS EKS via the aws cli on line 57.

- name: Get Kube config
  run: aws eks --region us-east-2 update-kubeconfig --name prod-stepwisemath-mexico --alias eks-prod

This very important 1-line command retrieves the connection data for your Kubernetes cluster and simultaneously formats it into a valid kubeconfig file and persists it on the Github ephemeral Ubuntu instance. Incidentally, the Kubernetes configuration data that is retrieved by this command is visible from the AWS EKS Console page for your cluster, in the three data fields (API service endpoint, certificate authority, and cluster ARN) near the bottom of the Configuration -> Details section.

Kubernetes Secrets

For the example Github Actions workflow we store configuration data for Open edX in multiple locations depending on a few factors. Open edX passwords for backend services like MySQL, MongoDB, and SMTP email are stored inside of Kubernetes Secrets and then retrieved using commands like the following

- name: MySQL
  run: |-
    echo "TUTOR_RUN_MYSQL=false" >> $GITHUB_ENV
    kubectl get secret mysql-root -n $NAMESPACE -o json | jq '.data | map_values(@base64d)' | jq -r 'keys[] as $k | "TUTOR_\($k|ascii_upcase)=\(.[$k])"' >> $GITHUB_ENV
    kubectl get secret mysql-openedx -n $NAMESPACE -o json | jq '.data | map_values(@base64d)' | jq -r 'keys[] as $k | "TUTOR_\($k|ascii_upcase)=\(.[$k])"' >> $GITHUB_ENV

Keep in mind that you can run these commands from your own computer, assuming that you’ve installed and configured kubectl. Note that many of the Kubernetes commands are in fact multiple piped commands. One of the more common patterns that you’ll find retrieves, decodes, and reformats password data: the first command returns the Secret with its values still base64-encoded, the next pipe decodes those values, and the final pipe reformats the decoded password data into Tutor-compliant command line parameters.
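To make those stages concrete, here is a rough Python sketch of what the jq pipeline does, using a made-up Secret payload rather than real credentials:

```python
import base64
import json

# Hypothetical payload, shaped like the output of
# `kubectl get secret mysql-root -n $NAMESPACE -o json`.
# Kubernetes stores Secret values base64-encoded.
secret_json = json.dumps({
    "data": {
        "MYSQL_ROOT_PASSWORD": base64.b64encode(b"not-a-real-password").decode(),
    }
})

# Pipe 1: select the .data map (jq '.data').
data = json.loads(secret_json)["data"]

# Pipe 2: base64-decode every value (jq 'map_values(@base64d)').
decoded = {key: base64.b64decode(value).decode() for key, value in data.items()}

# Pipe 3: reformat each key/value pair into a Tutor-compliant
# environment variable line (jq -r '"TUTOR_\(...)=\(...)"').
env_lines = [f"TUTOR_{key.upper()}={value}" for key, value in decoded.items()]

print(env_lines[0])  # → TUTOR_MYSQL_ROOT_PASSWORD=not-a-real-password
```

In the workflow, these lines are appended to $GITHUB_ENV so that subsequent steps can pass them along to Tutor.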

Kubernetes Ingresses

Kubernetes ingresses for Open edX are handled at deployment time because some of the stateful data, such as Let's Encrypt SSL certificates, can only be created at the moment of deployment. Row 181, “Create Kubernetes add-on resources”, provides an entry point for kubectl to read a collection of Kubernetes manifests that we’ve stored in the repository in the folder ci/tutor-deploy/environments/prod/k8s/.

- name: Create Kubernetes add-on resources
  run: |-
    # Create kubernetes ingress and other environment resources
    kubectl apply -f "ci/tutor-deploy/environments/$ENVIRONMENT_ID/k8s"

Good luck with automating your deployment!! I hope you found this helpful. Contributors are welcome. My contact information is on my web site. Please help me improve this article by leaving a comment below. Thank you!

The post Continuous Integration (CI) With Tutor Open edX – Part II appeared first on Blog.

Adding React To A Django Project
Fri, 28 Jan 2022


Learn the right way to combine Django and ReactJS. This article includes links to a public repository with a fully documented, fully-functional reference project that you can set up in less than an hour and run in your local dev environment using Docker.

If you’re looking for guidance on how to create React pages inside your Django project then you’re in the right place, so please keep reading. I maintain a fully-functional reference project in Github with quick start instructions to get it running locally in your development environment. I highly recommend that you take a look at this repo as part of your research. There are a number of small details codified in the repo which go unmentioned here for the sake of brevity. This article is really just supplemental narrative that explains my design objectives and implementation strategy.

In a nutshell, I wanted seamless integration between React and Django such that the build and test tools that ship with both technologies continue to work without any interference caused by the presence of the other. Additionally, I wanted neither technology to require any special configuration in production. I believe that the method outlined in this article achieves these design goals, and it bears mentioning that as of this writing I’ve not seen this strategy described elsewhere on the Internet. I’m reasonably sure that all of the following are accomplished:

  • Seamless integration between React and Django.
  • You can use Django for the REST API, user authentication, site headers & footers, admin, and testing
  • You can use React on any pages in your project of your choosing.
  • All of the build and test tools for both Django and React will continue to work, with no special configuration required.
  • You can include popular, complex tooling in your React stack such as Redux, React Router, Bootstrap, etcetera.
  • All of the native optimizations in “npm run build” like page chunking will continue to work.
  • You can manage all of your code in a single repository

Sample Output

Here’s an example page rendered using this integration methodology. Thanks to namespacing in React along with its quite-impressive build tool, getting React content to appear inside a Django-rendered page is as simple as including the optimized css and js build bundles, and including a div in the html body with an id value matching the one specified in your src/index.js file, which in this case is document.getElementById('reactjs-root'). Note the following:

  • the page header and footer are generated by Django
  • the Django debug toolbar is present on the righthand sidebar
  • the page itself requires authentication which is managed by Django
  • the black interior region is React-created content

Furthermore, as you can see from the html source to the right, the React content is being rendered normally, with the optimized css and js bundles created by React’s npm run build  added to the head section of the DOM, and the standard React hook added to the body element.

To verify that the build chain actually works I’ve modified frontend/src/App.js, adding one image and one line of text. Here’s a link to an archived copy of the complete Django template-rendered html that’s summarized on the right side of the illustration below.

A Bit of Theory

Before we get into the specifics of the technical implementation it behooves us to highlight a couple of React’s architectural principles that weigh on our strategy. First, React handles all file system addressing relative to a project’s root folder, and thus we can place the root folder anywhere in the file system and React’s build tools will work fine. This should already be apparent to you from your experience building standalone React apps, and it matters in our case because we need to place React’s build output inside of a special Django app that will serve all React content.

Second, a production React build consists entirely of static files, and thus nearly all of your React content will be served by Django’s built-in django.contrib.staticfiles. We simply need to add some configuration settings to config/settings/base.py to enable staticfiles to find the React build. I say nearly all because we’ll ignore React’s build/index.html file at run-time in favor of Django’s templating system, which leads us to the crux of this integration strategy:
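As a rough sketch of what those settings might look like (the names and paths below are assumptions for illustration only; the actual values live in the reference repository's config/settings/base.py):

```python
from pathlib import Path

# Hypothetical project root; in a real CookieCutter Django project this
# is typically derived from __file__ inside config/settings/base.py.
BASE_DIR = Path("/opt/project")

# Location of React's `npm run build` output inside the "frontend" Django app.
REACT_BUILD_DIR = BASE_DIR / "frontend" / "frontend" / "build"

# Let django.contrib.staticfiles discover React's optimized css/js bundles.
STATICFILES_DIRS = [
    REACT_BUILD_DIR / "static",
]
```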

Key Concept

For any Django template-rendered page in which we want to include React content, we need to implement a Django view and template that add React’s optimized css and js bundle entry points into the head of the DOM, and add a div with id=”reactjs-root” to the body element.
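A minimal sketch of such a template follows. The context variable names css_bundle and js_bundle are hypothetical placeholders for the bundle paths supplied by the Django view; the id value is the one referenced in src/index.js:

```html
<!-- frontend/templates/frontend/index.html -- illustrative sketch only -->
<!DOCTYPE html>
<html>
  <head>
    <!-- React's optimized build bundles, injected by the Django view -->
    <link rel="stylesheet" href="{{ css_bundle }}" />
    <script defer src="{{ js_bundle }}"></script>
  </head>
  <body>
    <!-- React mounts itself here; the id must match src/index.js -->
    <div id="reactjs-root"></div>
  </body>
</html>
```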

Lastly and to that end, we need to understand a couple of superficial aspects of React’s build output (see illustration to the right) that is generated by the command, npm run build or alternatively, yarn run build. The structure of React’s build output is fixed and has some defining features such as the folder structures and the locations and names of the css and js entry point bundles which we will reference in a Django view later in this article.

Of particular importance to us is the output file build/asset-manifest.json because its contents describe the relative file paths of the optimized css and js bundles that the build script created, and we need to add links to these two files in the head of our Django template output.
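To make the manifest's role concrete, here is a sketch of how a Django view might pull the two entry-point bundle paths out of asset-manifest.json. The file names and hashes below are made up, and the "files" key reflects the layout emitted by recent create-react-app versions:

```python
import json

# A trimmed, hypothetical asset-manifest.json as emitted by `npm run build`.
manifest_json = """
{
  "files": {
    "main.css": "/static/css/main.8c8b27cf.chunk.css",
    "main.js": "/static/js/main.1e0b0e52.chunk.js",
    "index.html": "/index.html"
  }
}
"""

def react_bundle_paths(manifest: dict) -> dict:
    """Return the css/js entry-point paths that a Django view would pass
    to its template context for injection into the head of the DOM."""
    files = manifest["files"]
    return {
        "css_bundle": files["main.css"],
        "js_bundle": files["main.js"],
    }

context = react_bundle_paths(json.loads(manifest_json))
print(context["css_bundle"])  # → /static/css/main.8c8b27cf.chunk.css
```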

The screen shots below illustrate the relationships between the files in a React app build folder. The build/asset-manifest.json file contains the key elements that we will integrate into the Django templating system.

Here’s a larger image of asset-manifest.json.

Technical Implementation Strategy: Django + React Setup

At the risk of being repetitive I’ll kindly remind you that you can clone a fully-functioning repository of this implementation strategy, and that the code and copious documentation contained therein are really the best way to get up to speed on how what follows actually works.

In a blank Django project scaffolded with CookieCutter Django we’re going to do the following:

  • Create a Django app named “frontend”. This app only needs three elements: apps.py, views.py, and a templates folder. The page links point to the actual source code for each element. The app will functionally replace React’s index.html for purposes of generating React content in the browser.
  • From the root of the “frontend” Django app, create a new React app using the standard command line, npx create-react-app frontend. You should immediately run npm run build to create an initial set of build output for testing purposes. Afterwards we’ll consolidate the React project contents into the root of the Django app “frontend” as per the screen shot to the left. We don’t do anything special to this scaffolding, which is the entire point of this strategy.
  • Add a few things to the bottom of config/settings/base.py which again, you can peruse in detail by following the link.
  • Add a line to config/urls.py that points to our Django app “frontend”.

I additionally stripped down the contents of public/index.html to its bare essentials which, while not technically necessary, definitely helps to clarify what is really needed for this implementation. So just be aware that, if you clone my repository, you’ll see a quite stripped down version of what you’re accustomed to seeing when running the React dev server.

The screen shots below illustrate the relationships between the code objects that we’re either creating or referencing. Ultimately, what’s being done in this integration strategy is laughably simple, once you understand it. Really, the only challenge to this entire strategy is that the work we’re doing is easily lost within the size and complexity of both Django and React. For your convenience I’m including links to each of the four files in the illustration.

Good luck with next steps on your project!! I hope you found this helpful. Contributors are welcome. My contact information is on my web site. Please help me improve this article by leaving a comment below. Thank you!

Customizing Service Workers in ReactJS Progressive Web Apps with Google Workbox
Tue, 02 Nov 2021


Learn how to customize Google Workbox’s default behavior for a service worker in your React app. This article explains how to modify the behavior of the service worker life cycle, enabling fully automated updates of your app in the background. Includes a bonus section with a React Bootstrap alert component for app update announcements.

Setting Up A Service Worker

The smart way to get started with service workers is to use create-react-app, which provides a simple way to scaffold a basic service worker for a React app:

npx create-react-app hello-world --template cra-template-pwa

This quick start approach gives your site basic offline capabilities by adding two boilerplate js modules to the root of your project, service-worker.js and serviceWorkerRegistration.js. It also installs Google’s npm package workbox-sw that vastly simplifies everything about managing a service worker.

By the way, a couple of reference sources that I came across bear mentioning. First, the web site create-react-app.dev contains documentation that explains how to customize these two boiler plate files to optimize your site’s offline behavior. Additionally, if you’re just getting started with service workers then I’d recommend this blog post from Danielly Costa, “Showing ‘new version available’ notification on create-react-app PWAs“. It’s well-written, easy to follow, it’s relatively current as of this publication, and it’s what I used when I was just getting started.

Moving along, the boilerplate code adds the following to your React app:

  • Registers a service worker for your React app

  • Automatically detects and installs updates to your service worker and offline content

  • Sets up basic caching to enable performant offline viewing of your site’s static content

The only thing that I dislike is that this boilerplate does several things to bring attention to your site’s offline capabilities. I don’t want that. I want my site to be resilient with regard to Internet connectivity, but I’d rather that my users don’t even notice when they’re offline. The less they notice, the better. Summarizing what you get versus what I actually wanted:

The default service worker life cycle

By default Workbox registers a service worker and then checks for and downloads any updates to your code base. At that point, it pauses the update process until all browser tabs and windows containing your app have been closed. Then, only upon the user’s next visit to your site does it actually update the locally cached content to activate these changes in the user’s browser. A slightly annoying side effect is that the boilerplate code sends messages to the javascript console any time the app is being accessed offline, and also when code is updated.

Versus what I wanted instead

By contrast, I not only want code updates downloaded and seamlessly activated but I also want the service worker to periodically check for updates say, at least once a day to ensure that users are never at risk of running dangerously outdated code.

I prototyped my desired behavior in my personal web site, lawrencemcdaniel.com (see the screen cast below). The source code is located in this Github repository in case you prefer to see the entire code base as opposed to the abbreviated code snippets that follow. I use the same code base for several other blog posts about ReactJS, Redux, REST api’s and front-end coding in general.

Some Editorial Comments On Hacking Google Workbox

I only had to modify a few files to achieve my desired behavior. The hard part, for me at least, was developing a technical understanding of what a service worker does, and then developing a commensurate understanding of how Google Workbox is trying to simplify how you work with them. With regard to the former, I found this documentation from Mozilla the most informative, “MDN Web Docs: Service Worker API“. Learning about Workbox on the other hand is more of a challenge. I ended up tracing the source code.

Service Workers are powerful. The scaffolding that you get from create-react-app really only scratches the surface. Having said that, there are weird gaps in the Service Worker API. For example, there are events for ‘fetch’ and ‘install’ and ‘activate’ but not for ‘fetched’ and ‘installed’ and ‘activated’, even though there are defined states for these.

The service worker life cycle model is simple. A service worker has four possible states, which migrate as follows: installing, then installed, then activating, then activated. There’s also a general-purpose exception state called redundant. The built-in wait period between ‘installed’ and ‘activating’ can be overridden by calling skipWaiting() (see an example of this in the default service-worker.js). You should note a couple of things however. First, it is recommended that your React app communicate with the service worker via a formal postMessage() api. And second, regardless of how you call skipWaiting(), it will only have an effect if there is actually a worker in an ‘installed’ state.

Debugging service workers is a challenge. For one thing, they only run on your “production” build, which obviously complicates testing. But additionally, the very nature of service workers is that they directly control which version of your code you’re running, and that can get confusing. Lastly, it’s all event-driven, which itself is challenging to trace and debug. My advice is to make copious use of console logging, as per the video screen cast above, so that execution threads are abundantly clear.

Summary of file modifications

Source File | Summary of changes
index.js | Remove boilerplate service worker registration
service-worker.js | No modifications are necessary
serviceWorkerRegistration.js | Add a hook for serviceWorkerRegistrationEnhancements
serviceWorkerRegistrationEnhancements.js | New js module to implement the onActivated event, plus periodic update checks
App.js | Refactor to a class component. Add event handlers (callbacks) for Workbox service worker life cycle events. Add appUpdate components for announcements.
appUpdate | React component to render Bootstrap announcements.

index.js

Remove the reference to serviceWorkerRegistration. We’re going to migrate this to App.js in the next step so that we can leverage the App component life cycle for rendering announcements about changes to the service worker lifecycle.

// If you want your app to work offline and load faster, you can change
// unregister() to register() below. Note this comes with some pitfalls.
// Learn more about service workers: https://cra.link/PWA

serviceWorkerRegistration.register();

service-worker.js

I did not need to make any changes to this file as part of these modifications. Note however that I did make extensive modifications to service-worker.js when setting up the cache behavior of my React app, though this is out of scope of this blog post.

serviceWorkerRegistration.js

I added a hook for my own behavioral enhancements, as follows:

import { serviceWorkerRegistrationEnhancements } from "./serviceWorkerRegistrationEnhancements";
 
// approximately row 57
function registerValidSW(swUrl, config) {
  navigator.serviceWorker
    .register(swUrl)
    .then((registration) => {
 
      // -----------------------------
      // my additional functionality
      // -----------------------------
      serviceWorkerRegistrationEnhancements(config, registration);
      // -----------------------------
 
      registration.onupdatefound = () => {
        const installingWorker = registration.installing;
        if (installingWorker == null) {
          return;
        }
        installingWorker.onstatechange = () => {
          if (installingWorker.state === 'installed') {
            if (navigator.serviceWorker.controller) {
              // At this point, the updated precached content has been fetched,
              // but the previous service worker will still serve the older
              // content until all client tabs are closed.
              if (DEBUG) console.log(
                'serviceWorkerRegistration.js - New content is available and will be used when all ' +
                  'tabs for this page are closed. See https://cra.link/PWA.'
              );
 
              // Execute callback
              if (config && config.onUpdate) {
                config.onUpdate(registration);
              }
            } else {
              // At this point, everything has been precached.
              // It's the perfect time to display a
              // "Content is cached for offline use." message.
              if (DEBUG) console.log('serviceWorkerRegistration.js - Content is cached for offline use.');
 
              // Execute callback
              if (config && config.onSuccess) {
                config.onSuccess(registration);
              }
            }
          }
        };
      };
    })
    .catch((error) => {
      console.error('Error during service worker registration:', error);
    });
}
// more boiler plate code follows .....

serviceWorkerRegistrationEnhancements.js

This new js module implements an onActivated callback which App.js listens for in order to raise a Bootstrap alert after updates are downloaded, installed, and activated. It also implements periodic daily checks for updates to the service worker.

export function serviceWorkerRegistrationEnhancements(config, registration) {
    const AUTOMATIC_UPDATE_CHECK_INTERVAL = 24;   // expressed in hours

    // OBJECTIVE 1.) SETUP A CALLBACK FOR THE ACTIVATED EVENT.
    // see: https://developer.mozilla.org/en-US/docs/Web/API/ServiceWorkerGlobalScope/activate_event
    registration.addEventListener('activate', function(event) {
        // Waiting for the 'activate' event to complete is the same thing
        // as listening for the non-existent 'activated' event to fire.
        // Note that waitUntil() expects a promise, not a bare function.
        event.waitUntil(new Promise((resolve) => {
            if (config && config.onActivated) {
                config.onActivated(registration);
            }
            resolve();
        }));
    });

    // OBJECTIVE 2.) SETUP PERIODIC UPDATE CHECKS.
    // periodically poll for updates to the service worker
    function checkUpdates(registration) {

        if (registration && registration.update) {
            registration.update();
            setTimeout(function() {  // queue up the next update check
                checkUpdates(registration);
            }, 1000 * 60 * 60 * AUTOMATIC_UPDATE_CHECK_INTERVAL);   
        }
    }

    // initiate periodic update checks.
    checkUpdates(registration);

}

Note: At this point we’ve fully implemented the modifications necessary to create automated updates with periodic checks in the background. The remaining code samples are only needed for rendering announcements of these activities in the browser.

App.js

BEFORE

import React from 'react';

function App() {
  return (
    <React.Fragment>
      <Head />
      <BrowserRouter>
        <Header />
        <Routes />
        <Footer />
      </BrowserRouter>
    </React.Fragment>
  );
}

export default App;

AFTER. Note that I refactored the default App from a functional to a class component. This is necessary so that we can make use of the class component life cycle methods.

import React, { Component } from 'react';
import * as serviceWorkerRegistration from './serviceWorkerRegistration';

// 
// misc app imports ...
//

// UI stuff for service worker notifications
import AppUpdateAlert from './components/appUpdate/Component';

const UPDATE_AVAILABLE_MESSAGE = "New content is available and will be automatically installed momentarily.";
const SUCCESSFUL_INSTALL_MESSAGE = "This app has successfully updated itself in the background. Content is cached for offline use.";

class App extends Component {

  constructor(props) {
    super(props);

    this.state = {
      isSet: false,              // True once componentDidMount runs
      customClass: props.cls,    // Expects "online" or "offline"

      // service worker state management
      // -----------------------------------------------------------------------------------------
      updatedSW: null,             // The service worker that is waiting to be updated

      isSWUpdateAvailable: false,  // Set to True to trigger a Bootstrap alert.
                                   // Set from a Workbox callback 
                                   // after a new service worker has 
                                   // downloaded and if ready to install.

      wasSWInstalledSuccessfully: false    // Set to True to trigger a Bootstrap alert.
                                           // Set from a Workbox callback after
                                           // service worker was successfully installed.
      // -----------------------------------------------------------------------------------------
    };

    // Workbox and React component callbacks.
    // We want these bound to this class so that garbage collection
    // never eliminates them while a Workbox event handler might
    // call one of them.
    this.resetSWNotificationStates = this.resetSWNotificationStates.bind(this);
    this.onSWUpdateAvailable = this.onSWUpdateAvailable.bind(this);
    this.onSWInstallSuccess = this.onSWInstallSuccess.bind(this);

  }

  // -------- Workbox Service Worker event handlers and state management -------

  // Callback for our AppUpdateAlert component.
  resetSWNotificationStates() {

    // this covers the intended use case
    // of allowing a server worker update to proceed
    // automatically, once the user has been made aware
    // that the update exists, was downloaded in the background
    // and is ready to install.
    if (this.state.updatedSW && this.state.isSWUpdateAvailable) {
      this.state.updatedSW.postMessage({
        type: 'SKIP_WAITING'
      });
    }

    // reset the service worker states
    this.setState({ 
      updatedSW: null,
      isSWUpdateAvailable: false,
      wasSWInstalledSuccessfully: false
    });
  }

  // Workbox callback for "service worker update ready" event
  onSWUpdateAvailable(registration) {
    if (this.state.isSet && registration) {
      this.setState({
        updatedSW: registration.waiting,
        isSWUpdateAvailable: true,
        wasSWInstalledSuccessfully: false
      });
    }
  }

  // Workbox callback for "service worker installation success" event
  onSWInstallSuccess(registration) {
    if (this.state.isSet) {
      this.setState({ 
        updatedSW: registration,
        isSWUpdateAvailable: false,
        wasSWInstalledSuccessfully: true
      });
    } 
  }

  // ------------ React Component life cycle methods ------------
  componentDidMount() {

    this.resetSWNotificationStates();
    this.setState({
      isSet: true
    });

    // Note: I relocated this snippet from index.js
    // in order to add Workbox's two event handlers
    // for onUpdate and onSuccess.
    if (process.env.NODE_ENV === 'production') {
      serviceWorkerRegistration.register({ 
        onUpdate: this.onSWUpdateAvailable,
        onSuccess: this.onSWInstallSuccess,
        onActivated: this.onSWInstallSuccess   // a custom event that I added to Workbox
      });
    }
  }
  
  render() {

    // service worker app update alerts.
    function AppUpdateAlerts(props) {
      const parent = props.parent;
      
      return(
        <React.Fragment>
            {parent.state.isSet &&
              <React.Fragment>
                {parent.state.isSWUpdateAvailable && 
                        <AppUpdateAlert 
                          msg={UPDATE_AVAILABLE_MESSAGE} 
                          callback={parent.resetSWNotificationStates} /> 
                }
                {parent.state.wasSWInstalledSuccessfully && 
                        <AppUpdateAlert 
                          msg={SUCCESSFUL_INSTALL_MESSAGE} 
                          callback={parent.resetSWNotificationStates} /> 
                }
              </React.Fragment>
            }
        </React.Fragment>
      )
    }

    return ( 
      <React.Fragment>
        <Head />
        <BrowserRouter>
          <div className={"container-fluid p-0 " + this.state.customClass}>
            <Header />
            <AppUpdateAlerts parent={this}/>
            <Routes />
            <Footer />
          </div>
        </BrowserRouter>
      </React.Fragment>
    )
  }
}

export default App;

appUpdate React Component

This React component is used in App.js. It renders a Bootstrap alert to the top of the screen which automatically disappears after 5 seconds and then fires an optional callback.

import React from 'react'; 
import { Alert } from 'reactstrap';

import './styles.css';

const ALERT_VISIBILITY_SECONDS = 5.0;

class AppUpdateAlert extends React.Component {
  constructor(props) {
    super(props);

    this.state={
      visible : false,
      msg: props.msg
    }

    // Bind the callback so we can execute
    // it from anywhere inside this class, and
    // so that garbage collection knows to
    // leave it alone while we're running.
    if (props.callback) this.callback = props.callback.bind(this);

  }

  // Make ourself `visible` so that the Bootstrap alert
  // renders to the screen. Also set a Timeout that fires
  // after X seconds to automatically dismiss the alert
  // and execute the callback function.
  componentDidMount() {
    this.setState({
      visible:true
    }, ()=>{window.setTimeout(()=>{
        this.setState({
          visible:false
        });
        if (this.callback) this.callback();
      }, 1000 * ALERT_VISIBILITY_SECONDS);
    });

  }

  render() { 
    return(
      <React.Fragment>
        <div className="fixed-top m-0 p-5 text-right">
          <Alert isOpen={this.state.visible} fade={true} className="border border-light alert alert-warning text-center">
            {this.state.msg}
          </Alert>
        </div>
    </React.Fragment>
  );
  }
}


export default AppUpdateAlert;

I hope you found this helpful. Contributors are welcome. My contact information is on my web site. Please help me improve this article by leaving a comment below. Thank you!

Developer’s Guide to Opaque Keys
Mon, 18 Oct 2021


Open edX courseware leverages two powerful technologies for locating and interacting with course content, both of which were developed internally. Opaque Keys describe the location of course content both in terms of browser access via url as well as internally in the application source code. XBlocks generically describe individual units of course content such as a chapter, a subsection, or an individual homework problem or test question. Let’s take a closer look at how these two intertwined technologies work.

Though the Open edX course structure and its object key system, Opaque Keys, are each quite complex, you’ll see in “Code Sample I” that we can create and traverse a course outline in only four lines of code. With a cursory understanding of edx-platform’s in-place application api’s, you’ll be able to do amazing things with little code.

Our discussion necessarily begins with Opaque Keys as these are the code objects that describe individual chunks of course content. By way of example, each of the following is a string representation of some kind of Opaque Key:

  • The slug, “course-v1:edX+DemoX+Demo_Course”, contained in the url value, “https://school-of-rock.edu/courses/course-v1:edX+DemoX+Demo_Course/” is a string representation of a CourseKey, which descends from OpaqueKey.

  • “block-v1:edX+DemoX+Demo_Course+type@vertical+block@vertical_98cf62510471” is a string representation of a BlockUsageLocator, which is also a descendant of OpaqueKey, albeit on a different branch of the class hierarchy. Strings of this form also surface in the URL structures of the LMS.
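
Although you should always parse these strings with the opaque-keys package itself, the serialized form follows a predictable grammar. The stdlib-only sketch below is purely illustrative — the real implementation is `CourseKey.from_string()` in the pip-installed opaque-keys package — and simply shows how the demo course key string decomposes:

```python
# Illustrative only: real parsing is done by the pip-installed opaque-keys
# package (opaque_keys.edx.keys.CourseKey.from_string). This sketch just
# demonstrates the grammar of a serialized course-v1 key.
def split_course_key(serialized: str) -> dict:
    """Decompose a "course-v1:<org>+<course>+<run>" string into its parts."""
    prefix, _, body = serialized.partition(":")
    if prefix != "course-v1":
        raise ValueError("not a course-v1 key: " + serialized)
    org, course, run = body.split("+")
    return {"org": org, "course": course, "run": run}

parts = split_course_key("course-v1:edX+DemoX+Demo_Course")
print(parts)  # {'org': 'edX', 'course': 'DemoX', 'run': 'Demo_Course'}
```

The real CourseKey object exposes these same pieces as its org, course and run attributes.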

What are Opaque Keys?

Opaque Keys do the following:
  1. accurately describe any course object in terms of its structural location: which course run, which branch, and which version
  2. map to and from the Open edX URL structures
  3. provide backward compatibility with older Open edX key and storage strategies

Opaque Keys are referred to generically in documentation as locators, keys, course keys, course IDs, asset keys, and usage keys. None of these is incorrect, mind you. The three most common kinds of opaque keys are:

  • CourseLocator (formerly a CourseKey)
  • BlockUsageLocator (formerly a UsageKey and AssetKey)
  • DefinitionKey (reusable code)

A CourseLocator (formerly CourseKey) contains many BlockUsageLocators (AssetKeys and UsageKeys), and it knows about these. An AssetKey and a UsageKey each knows which CourseKey it is associated with. DefinitionKeys are context-independent. The class hierarchy diagram below helps to make sense of each of these statements.

Opaque Keys Class Hierarchy
Opaque Keys Persistence Models

Opaque keys are fast and powerful. The keys are referenced everywhere in the code base and therefore need to instantiate and execute performantly. Incidentally, there’s a longer-term plan for Opaque Keys which will make them, well, more opaque. While I’m not privy to the salient details of this vision, I am aware that bright minds are hard at work at a brighter future for Opaque Keys (pun intended).

Modern Opaque Keys provides a way to embed additional distinguishing content attributes such as course run, branch (ie “Draft” or “Published”), and version in the object itself. Naturally, attributes of this nature have implications to the user experience and the overall behavior of the platform itself. Much of this is encapsulated in the Opaque Keys project.

In Open edX source code, whether this be in edx-platform or any other code repository in the Open edX ecosystem, the string representations of Opaque Keys such as the course_id slug in a course url are consumed by the pip-installed opaque-keys Python package which provides powerful introspection and utility capabilities. As the name “Opaque” implies, there is more to these keys than meets the eye. The Opaque Keys project was originally part of edx-platform, but it was spun off into a separate repository in 2014.

I am aware of only these three documentation sources for Opaque Keys, each located in the edx-platform repo Wiki: 1.) “Opaque Keys (Locators)“, 2.) “Locators: Developer Notes“, and 3.) “Split: the versioning, structure saving DAO“. When reading these documents it behooves you to keep in mind that they were written to inform an already-knowledgeable audience about noteworthy changes to Opaque Keys at a point in time in the past. It’s therefore pretty easy to draw incorrect conclusions from these documents. Conversely, this article is intended to illustrate in practical terms how to make use of these technologies at present based on my own personal experience.

Something else to keep in mind is that Opaque Keys have changed significantly over the life of the project (10+ years), and the Open edX code base provides backward compatibility to all of its incarnations. In many cases this complicates analyzing and understanding the source code. This is a direct effect of the double-edged sword of maturing open-source projects. It's oftentimes best not to delve too deeply into the sausage-making of Opaque Keys' inner workings. Fortunately, the Opaque Key classes have been stable for more than seven years, and thus you have little if any need to burden yourself with their knotty history.

Lastly, the Opaque Keys project straddles large and gradual tectonic shifts in Open edX’s persistence strategy referred to, respectively, as “Old Mongo” and “Split Mongo”. This itself merits an eventual blog post as the changes were substantial and thus also impacted the complexity of the lower-levels of the Open edX code base. Vastly paraphrasing, the modern Open edX platform leverages XModule, which identifies courses with “Locators” (a kind of Opaque Key), and reusable program code with “Locations” (also a kind of Opaque Key).

What is a Block?

From the extension engine blog, “XBlock is the SDK for the edX MOOC platform. It’s a framework that allows the open source software development community to extend and enhance edX to meet future needs and is written in Python2. XBlock is one of the aspects of edX that makes it such a powerful tool.” Except this is a little bit misleading in that all Open edX course content is created with XBlocks, also referred to more generically as “blocks”.

We could talk at length about blocks but we’ll save that for another blog post someday soon, hopefully.

All blocks are identifiable and addressable with a Locator (an Opaque Key class object instance), which in turn means that all blocks have a URL, and you can create an instance of any course block from XModule, even though blocks can vary significantly in terms of their UI, feature set and storage needs.

What is a Course Structure?

In practical terms, a Course Structure is the course outline of a given course run. It exists in a well-ordered tree-type organizational structure of various kinds of XBlocks. A Course Structure is built from blocks of discernible categories including “course”, “chapters” (Sections), “sequentials” (Subsections) and “verticals” (Units). Course Management Studio’s “Course Outline” view provides a technically accurate and complete representation of a Course Structure.
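
To make that tree shape concrete, here is a toy, stdlib-only model of the course → chapter → sequential → vertical hierarchy. The keys and helper function are illustrative, not edx-platform code; the parent-before-children visit order mirrors what topological_traversal() produces in Code Sample I below:

```python
# Toy illustration of a Course Structure tree; the real thing is built
# from XBlocks and persisted via XModule/modulestore.
tree = {
    "course-root": ["chapter-1"],
    "chapter-1": ["sequential-1"],   # a Studio "Section"
    "sequential-1": ["vertical-1"],  # a Studio "Subsection"
    "vertical-1": ["problem-1"],     # a Studio "Unit"
    "problem-1": [],
}

def walk(block, children=tree):
    """Yield block keys depth-first, parents before children."""
    yield block
    for child in children[block]:
        yield from walk(child)

print(list(walk("course-root")))
# ['course-root', 'chapter-1', 'sequential-1', 'vertical-1', 'problem-1']
```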

It turns out that every block in a course, including the block representing the course run itself, has a UsageKey. That's because CourseLocator and BlockUsageLocator both inherit from UsageKey. The UsageKey for the course itself is identifiable in code as the "root_block_usage_key". Every block can be traversed individually since all blocks are aware of their children. And, as per "Code Sample II", you can programmatically find any block's parent vertical, sequential, chapter or course: blocks are also aware of their parents, and these are generically inspectable categories of all blocks.

Persisting a data structure of this complexity requires both MySQL and Mongo, and potentially a file system or CDN as well. XModule exists to abstract these complexities from the basic CRUD of working with and organizing the blocks in a course run.

Instantiating and inspecting a Course Structure is actually a lot easier than it appears via inspection of the edx-platform code base, namely because you probably don’t need to worry about backwards compatibility and edge cases. The three code samples that follow provide examples of different ways to instantiate and traverse a course structure, and each of these only contains a few lines of salient code.

Code Sample I: Iterate a Course Structure

import string

from opaque_keys.edx.keys import CourseKey
from opaque_keys.edx.locator import BlockUsageLocator
from common.lib.xmodule.xmodule.modulestore.django import modulestore

# entry point to the block_structure api.
from openedx.core.djangoapps.content.block_structure.api import get_course_in_cache


def inspect_course_structure() -> None:
    """
    Create a block structure object for the demo course then
    traverse the blocks contained in the block structure.

    course_key:     opaque_keys.edx.keys.CourseKey
                    example course-v1:edX+DemoX+Demo_Course
    """
    def print_attribute(xblock, attr: str) -> None:
        # getattr() is the idiomatic way to read an attribute by name
        print("XBlock {name} {attr}: {val}".format(name=xblock.display_name, attr=attr, val=getattr(xblock, attr)))

    # from_string() is a classmethod; do not instantiate CourseKey directly
    course_key = CourseKey.from_string("course-v1:edX+DemoX+Demo_Course")

    # get_course_in_cache() is part of the block_structure application api
    # which, when possible, is the preferred way to interact with course content.
    #
    # It is a higher order function implemented on top of the
    # block_structure.get_collected function that returns the block
    # structure in the cache for the given course_key.
    #
    # Returns:
    #   openedx.core.djangoapps.content.block_structure.block_structure.BlockStructureBlockData
    #   The collected block structure, starting at root_block_usage_key.
    collected_block_structure = get_course_in_cache(course_key)

    # topological_traversal() iterator returns all blocks
    # in the course structure, following the rules of a
    # topological tree traversal.
    # see https://en.wikipedia.org/wiki/Topological_sorting
    #
    # block_key is type opaque_keys.edx.locator.BlockUsageLocator
    for block_key in collected_block_structure.topological_traversal():

        # xblock is a BlockUsageLocator but it points to 
        # its fully-initialized XBlock content
        xblock = modulestore().get_item(block_key)

        # inspect some of the attributes of this xblock
        print_attribute(xblock=xblock, attr="display_name")
        print_attribute(xblock=xblock, attr="category")
        print_attribute(xblock=xblock, attr="location") # note: this is the block_key
        print_attribute(xblock=xblock, attr="start")
        print_attribute(xblock=xblock, attr="published_on")
        print_attribute(xblock=xblock, attr="published_by")
        print_attribute(xblock=xblock, attr="edited_on")
        print_attribute(xblock=xblock, attr="edited_by")

Code Sample II: Find a Parent Object Within the Course Structure

import string

from opaque_keys.edx.keys import UsageKey
from openedx.core.djangoapps.content.block_structure.block_structure import BlockStructureBlockData

from common.lib.xmodule.xmodule.modulestore.django import modulestore

def get_parent_location(category: str, block_key: UsageKey, block_structure: BlockStructureBlockData) -> UsageKey:
    """
    for the XBlock corresponding to the block_key, returns the UsageKey (location) for 
    its course, chapter, sequential, or vertical Block.

    Note: these equate to:
        course is the CourseSummary object
        chapter is a "Section" from Course Management Studio
        sequential is a "Subsection" from Course Management Studio
        vertical is a "Unit" from Course Management Studio

    Returns None if nothing is found.

    parameters:
    -------------
    category: Python str, one of "course", "chapter", "sequential", or "vertical"
    block_key: opaque_keys.edx.keys.UsageKey, 
    block_structure: openedx.core.djangoapps.content.block_structure.block_structure.BlockStructureBlockData
    """

    xblock = modulestore().get_item(block_key)
    if xblock.category == category:
        return block_key

    for parent_key in block_structure.get_parents(block_key):
        parent_xblock = modulestore().get_item(parent_key)
        if parent_xblock.category == category:
            return parent_key

    return None

Code Sample III: Get the Transformed Blocks of a Course Structure

# python stuff
import string

# django stuff
from django.contrib.auth import get_user_model

# common stuff
from opaque_keys.edx.keys import CourseKey, UsageKey
from opaque_keys.edx.locator import BlockUsageLocator
from common.lib.xmodule.xmodule.modulestore.django import modulestore

# API entry point to the course_blocks app with top-level get_course_blocks function.
import lms.djangoapps.course_blocks.api as course_blocks_api

# XBlock transformers
# -------------------
from lms.djangoapps.course_blocks.transformers.access_denied_filter import AccessDeniedMessageFilterTransformer
from lms.djangoapps.course_blocks.transformers.hidden_content import HiddenContentTransformer
from lms.djangoapps.course_api.blocks.transformers.blocks_api import BlocksAPITransformer
from lms.djangoapps.course_api.blocks.transformers.milestones import MilestonesAndSpecialExamsTransformer
from openedx.core.djangoapps.content.block_structure.transformers import BlockStructureTransformers
from openedx.features.effort_estimation.api import EffortEstimationTransformer
# -------------------


def get_transformed_course_blocks():
    """
    Use the course_blocks api to create a list of the xblocks for 
    the demo course.

    Mutate the demo course's XBlocks using a list of
    commonly-used Transformers.
    """
    def print_attribute(xblock, attr: str) -> None:
        print("XBlock {name} {attr}: {val}".format(name=xblock.display_name, attr=attr, val=getattr(xblock, attr)))

    # Note: refer to the class hierarchy above in this
    # blog post to better understand how these three
    # lines of code take us from a string value of a course key,
    # to a CourseKey, and finally to a UsageKey pointing to the 
    # root block of the course structure. 
    course_id = "course-v1:edX+DemoX+Demo_Course"

    # create an instance of opaque_keys.edx.keys.CourseKey
    course_key = CourseKey.from_string(course_id)

    # create an instance of opaque_keys.edx.keys.UsageKey
    # which is also the root usage key for the Course.
    course_usage_key = modulestore().make_course_usage_key(course_key)

    # any superuser account with privileges to see all content.
    # note: the Django auth User model field names are is_superuser and is_active.
    user = get_user_model().objects.filter(is_superuser=True, is_active=True).first()


    # BlockStructureTransformers encapsulates an ordered list of block
    # structure transformers. It uses the Transformer Registry
    # to collect their data.
    transformers = BlockStructureTransformers()

    # Transformers can add/remove xblock attribute fields, change 
    # the visibility settings of blocks and block attributes, as well as
    # change attribute values.
    #
    # We can add more transformers in order to continue to mutate
    # the xblock objects. 
    # get_course_block_access_transformers(user) returns the default list
    # of transformers for manipulating course block structures based on
    # the user's access to the course blocks. These include:
    #    ContentLibraryTransformer(),
    #    ContentLibraryOrderTransformer(),
    #    StartDateTransformer(),
    #    ContentTypeGateTransformer(),
    #    UserPartitionTransformer(),
    #    VisibilityTransformer(),
    #    DateOverrideTransformer(user),
    # Because it already returns a list, add it to the collection directly
    # rather than nesting it inside the list below.
    transformers += course_blocks_api.get_course_block_access_transformers(user)

    transformers += [

        # A transformer that handles both milestones and special (timed) exams.
        MilestonesAndSpecialExamsTransformer(),

        # A transformer that enforces the hide_after_due field on
        # blocks by removing children blocks from the block structure for
        # which the user does not have access. The hide_after_due
        # field on a block is percolated down to its descendants, so that
        # all blocks enforce the hidden content settings from their ancestors.
        HiddenContentTransformer(),

        # A transformer that removes any block from the course that has an
        # authorization_denial_reason or an authorization_denial_message.
        AccessDeniedMessageFilterTransformer(),

        # A transformer that adds effort estimation to the block tree.
        EffortEstimationTransformer(),

        # Umbrella transformer that contains all the transformers needed by the
        # Course Blocks API. includes the following transformers:
        #   StudentViewTransformer
        #   BlockCountsTransformer
        #   BlockDepthTransformer
        #   BlockNavigationTransformer
        #   ExtraFieldsTransformer
        BlocksAPITransformer()
    ]

    # create a BlockStructureBlockData object instance for 
    # the demo course.
    #
    # This is a transformed block structure,
    # starting at starting_block_usage_key, that has undergone the
    # transform methods for the given user and the course
    # associated with the block structure.
    #
    # If using the default transformers, the transformed block 
    # structure will be exactly equivalent to the blocks that
    # the given user has access.
    blocks = course_blocks_api.get_course_blocks(
        user,
        course_usage_key,
        transformers,
    )

    for xblock in blocks:
        # do stuff with the transformed xblocks
        print_attribute(xblock=xblock, attr="category")
        print_attribute(xblock=xblock, attr="display_name")
        print_attribute(xblock=xblock, attr="location")

I hope you found this helpful. Contributors are welcome. My contact information is on my web site. Please help me improve this article by leaving a comment below. Thank you!

Getting Started With Open edX Plugin Architecture https://blog.lawrencemcdaniel.com/getting-started-with-open-edx-plugin-architecture/ Fri, 01 Oct 2021 18:22:12 +0000


Since at least August-2020 it’s become possible to implement custom code for the Open edX platform without forking the edx-platform repository. Not only is it possible but it’s considered best practice to organize both your custom code as well as any platform modifications into separate pip-installable projects. This article, which includes practical code exercises, will quickly get you up to speed on the right way to get started leveraging the Open edX plugin architecture.

Summary

In the spring of 2019 at the Open edX annual conference in San Diego, Nimisha Asthagiri, chief architect & senior director of engineering at edX, laid out a roadmap to transition the Open edX platform into a tighter, more stable core surrounded by a vibrant ecosystem of plugins. Presentation slides are available at this link. At the time, no such plugins existed, but today this vision is fully realized. Additionally, edX has refactored some of its legacy code as plugins, and these make for excellent guides on how to approach a multitude of coding situations.

We can organize Open edX's refactored code into two distinct groups: first, legacy apps that have been refactored as plugins but that (as of the Lilac release) still reside in the edx-platform repository; and second, a set of Open edX plugins available for download on PyPI.

All of these apps use the Open edX plugin architecture and share a common configuration pattern which we'll review more closely. Note that the app configuration is the same regardless of whether the source code resides in the edx-platform repository or whether it's been packaged and distributed via PyPI. The plugin management code itself is implemented partially within edx-platform in openedx.core.djangoapps.plugins and also in a separate pip-installed package named edx-django-utils. Reviewing this source code showed me most of what I needed to know about the Open edX plugin architecture, and it's actually quite simple.

There are three technical distinctions between a traditional Django app and an Open edX plugin. Respectively, these concern:

  1. Registering your app (eg “plugin”) with Django
  2. Registering your app’s URL with Django, and
  3. Registering your app’s custom settings with Django.

Technically speaking, an Open edX plugin is a traditional Django app by every measure of the definition, with no exceptions whatsoever. A traditional Django app becomes an Open edX plugin by defining a dict named plugin_app in the apps.py module. This dict describes traditional Django configuration information about the settings and the URL to which the app is registered. There potentially are also some considerations in setup.py which are best discovered by reviewing the underlying source code of existing Open edX plugins distributed on PyPi.

You should consider packaging your code as a plugin, not least because this will encapsulate your custom code into a single repository, making it easier to find and maintain after deployment. Secondarily, it simplifies future Open edX platform upgrades, simply because upgrading a stock version of edx-platform is often a lot easier than upgrading a fork. Third, packaging your code as a plugin is a prerequisite if you have any aspirations of fully open-sourcing your project in the future.

Importantly, the edX engineering team has developed a set of internal tools for creating their own plugins, and you should definitely take advantage of these so that your projects are structured, organized and named according to Python, Django and Open edX community best practices.

I. Registering your plugin with Django

For a traditional Django app in Open edX (LMS or CMS) you would insert your dot-notated app definition into the Python/Django list INSTALLED_APPS. But with the Open edX plugin architecture your app is automagically registered. Cool, huh? Its plugin manager introspects your installed package during service startup, and if it encounters a correctly structured dict named plugin_app inside your apps.py class definition then it will add your app to INSTALLED_APPS.

Since Open edX is a large platform, where in this list you place your app often matters, and you may have had previous experience having to tinker with your app's location in the list in order to avoid Python import errors. Though this likely has not changed, I amazingly have not run into a problem of this nature with Open edX plugins. I don't know if the Open edX plugin manager is making smart, deliberate choices about where in the list to insert my apps, or if I'm simply lucky. Hopefully, like me, you won't run into problems on this front.

II. Registering your plugin’s URL with Django

Similarly, for a traditional Django app in LMS/CMS you would set the url for an app by editing the project’s urls.py module. So for example, to add a url to the LMS you would traditionally edit the Python module edx-platform/lms/urls.py. But with the Open edX plugin architecture, you can register your url with Django at run-time from within the apps.py module of your Django app, as follows:

from django.apps import AppConfig
from edx_django_utils.plugins import PluginURLs
from openedx.core.djangoapps.plugins.constants import ProjectType


class MyAppConfig(AppConfig):
    name = "my_app"

    plugin_app = {
        PluginURLs.CONFIG: {
            # The three dict attributes literally equate to the following
            # lines of code being injected into edx-platform/lms/urls.py:
            #
            # import my_app.urls
            # url(r"^my-app/", include((urls, "my_app"), namespace="my_app")),
            #
            ProjectType.LMS: {
                PluginURLs.NAMESPACE: name,
                PluginURLs.REGEX: "^my-app/",
                PluginURLs.RELATIVE_PATH: "urls",
            }
        }
    }

Note that the line PluginURLs.RELATIVE_PATH: "urls", describes the location of your urls.py module within your plugin, which typically would reside in your repository in the same root location as apps.py itself.
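
For reference, the module that RELATIVE_PATH points to is just an ordinary Django URLconf; there is nothing Open edX-specific about it. A minimal sketch, in which the view class is hypothetical:

```python
# my_app/urls.py -- an ordinary Django URLconf.
# DiscoBallView is a hypothetical view for illustration only.
from django.urls import path

from .views import DiscoBallView

app_name = "my_app"

urlpatterns = [
    # resolves to /my-app/ because of PluginURLs.REGEX in apps.py
    path("", DiscoBallView.as_view(), name="disco-ball"),
]
```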

III. Registering your plugin’s settings with Django

Lastly, for a traditional Django app in LMS/CMS you would add your Django settings by directly modifying one of the settings modules. For example, if your app included a parameter MY_APP_DISCO_BALL = ['Orange', 'Blue', 'Green', 'Pink'] then you might add this to edx-platform/lms/settings/common.py.

In an Open edX plugin however, you define your settings in similarly-named modules in your project and then bind these modules at run-time to one of the Open edX settings modules like in this example:

from django.apps import AppConfig
from edx_django_utils.plugins import PluginSettings
from openedx.core.djangoapps.plugins.constants import ProjectType, SettingsType


class MyAppConfig(AppConfig):
    name = "my_app"

    plugin_app = {
        # This dict causes all constants defined in settings/common.py and settings/production.py
        # to be injected into edx-platform/lms/envs/common.py and edx-platform/lms/envs/production.py
        # Refer to settings/common.py and settings/production.py for example implementation patterns.
        PluginSettings.CONFIG: {
            ProjectType.LMS: {
                SettingsType.PRODUCTION: {PluginSettings.RELATIVE_PATH: "settings.production"},
                SettingsType.COMMON: {PluginSettings.RELATIVE_PATH: "settings.common"},
            }
        }
    }

Note that the fragment {PluginSettings.RELATIVE_PATH: "settings.production"} describes the location of a Python module named production.py located in ./settings/production.py relative to the location of your apps.py module.
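
Each settings module referenced this way is expected to define a plugin_settings(settings) function, which the plugin manager calls at startup, passing in the LMS settings module so your plugin can attach its own constants. A minimal sketch of settings/common.py, reusing the hypothetical disco-ball parameter from earlier:

```python
# my_app/settings/common.py
# The plugin manager calls plugin_settings() during LMS startup, passing
# the LMS settings module; mutate it here to register your constants.
# MY_APP_DISCO_BALL is the hypothetical parameter from the example above.
def plugin_settings(settings):
    settings.MY_APP_DISCO_BALL = ["Orange", "Blue", "Green", "Pink"]
```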

IV. Putting it all together: a complete example

Combining these two code patterns yields the following.

import logging
from django.apps import AppConfig
from edx_django_utils.plugins import PluginSettings, PluginURLs
from openedx.core.djangoapps.plugins.constants import ProjectType, SettingsType

log = logging.getLogger(__name__)

class MyAppConfig(AppConfig):
    name = "my_app"
    label = "my_app"
    verbose_name = "My Open edX Plugin"

    # See: https://edx.readthedocs.io/projects/edx-django-utils/en/latest/edx_django_utils.plugins.html
    plugin_app = {
        PluginURLs.CONFIG: {
            ProjectType.LMS: {
                PluginURLs.NAMESPACE: name,
                PluginURLs.REGEX: "^my-app/",
                PluginURLs.RELATIVE_PATH: "urls",
            }
        },
        PluginSettings.CONFIG: {
            ProjectType.LMS: {
                SettingsType.PRODUCTION: {PluginSettings.RELATIVE_PATH: "settings.production"},
                SettingsType.COMMON: {PluginSettings.RELATIVE_PATH: "settings.common"},
            }
        },
    }

    def ready(self):
        log.debug("{label} is ready.".format(label=self.label))

V. Packaging your code with setup.py

A proper setup.py is best explained by way of example. Additionally, Cookiecutter, further explained in section VI below, should take care of most, if not all, of the setup.py content. You should defer to Cookiecutter on any items that conflict with what you see below in this example.

import os
from setuptools import find_packages, setup

# allow setup.py to be run from any path
os.chdir(os.path.normpath(os.path.join(os.path.abspath(__file__), os.pardir)))

setup(
    name="my_app",
    version="0.1.0",
    packages=find_packages(),
    package_data={"": ["*.html"]},  # include any Mako templates found in this repo.
    include_package_data=True,
    license="Proprietary",
    description="Adds a sparkling winter wonderland to normal Open edX environments",
    long_description="",
    author="Lawrence McDaniel",
    author_email="lpm0073@gmail.com",
    url="https://github.com/lpm0073/my-app",
    install_requires=[
        # don't add packages that are already required by open-edx.
        # doing so only increases the risk of version conflicts in production.
        "django-environ==0.7.0",
        "django-extensions==3.1.3",
    ],
    zip_safe=False,
    keywords="Django edx",
    classifiers=[
        "Development Status :: 3 - Alpha",
        "Framework :: Django",
        "Framework :: Django :: 2.2",
        "Framework :: Django :: 3.0",
        "Framework :: Django :: 3.1",
        "Framework :: Django :: 3.2",
        "Intended Audience :: Developers",
        "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
        "Natural Language :: English",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.8",
    ],
    entry_points={
        # IMPORTANT: ensure that this entry_points coincides with that of edx-platform
        #            and also that you are not introducing any name collisions.
        # https://github.com/edx/edx-platform/blob/master/setup.py#L88
        "lms.djangoapp": [
            "my_app = my_app.apps:MyAppConfig",
        ],
    },
    extras_require={
        "Django": ["Django>=2.2,<2.3"],
    },
)

VI. Creating your own plugin, the right way

The edX engineering team maintains a set of Cookiecutter templates, located here https://github.com/edx/edx-cookiecutters. They use these internally for creating new code projects. There are four different Cookiecutter templates in this repo and it is important that you choose the correct one for your project.

If you’re unfamiliar, Cookiecutter is a command-line utility that creates projects from cookiecutters (project templates), e.g. creating a Python package project from a Python package project template. Cookiecutter was created by Audrey Roy Greenfeld, co-author of the popular “Two Scoops of Django” series of books. So to be clear, Audrey Roy Greenfeld created the Cookiecutter command-line utility, and the edX team created and maintains a collection of Open edX Cookiecutter templates for starting new plugin projects.

You should also use these for starting your project. Installation and usage instructions are in the README of the repository. A couple of words of caution about this repo. First, the installation instructions refer to a somewhat-nonstandard virtual environment manager named virtualenvwrapper. It is important that you use this virtual environment manager with this repo because it will ensure that the virtual environment installs the correct version of Python (Python 3.8 as of October 2021). Second, I had trouble installing virtualenvwrapper on my Mac Mini M1, in part because I already have a couple of other virtual environment managers installed. But I also got the impression that the authors are not Mac guys, as they don't offer a Homebrew installation method.

edX Cookiecutters

  • cookiecutter-django-app. In most cases this is what you should use. This scaffolds an empty project that is intended to be bundled as a pip-installable Open edX plugin. Oddly, the Cookiecutter template does not include any of the plugin_app code fragments for apps.py that I’ve demonstrated in this article, thus, you’ll have to copy/paste these yourself. However, it does take care of a lot of other minutia that is important and easy to overlook.
  • cookiecutter-xblock. Ditto, but for creating a pip-installable XBlock.

  • cookiecutter-python-library. It is unlikely that you would ever use this template. There are several good Cookiecutter templates available for creating a generic Python package, including from the Cookiecutter site itself. This template really only exists to help ensure that edX engineers conform to internal coding policies and standards when creating new Python packages (which probably is not often even for them).

  • cookiecutter-django-ida. It is also exceedingly unlikely that you would ever use this template which is for creating standalone web applications (eg “Independently Deployable Apps”). So for example, the Open edX LMS is an independently-deployed application, and so is Course Management Studio.

I hope you found this helpful. Contributors are welcome. My contact information is on my web site. Please help me improve this article by leaving a comment below. Thank you!

Getting Started With Open edX API's
https://blog.lawrencemcdaniel.com/getting-started-with-open-edx-apis/ Mon, 27 Sep 2021 21:47:56 +0000
The Open edX platform code base includes a comprehensive set of REST API's that cover everything from viewing the course catalogue, to managing users, to administering your e-commerce system, and more. This article provides step-by-step instructions on how to enable and interact with this powerful set of API's that you can leverage to integrate Open edX's core functionality with your other systems as well as to develop your own custom web apps.

Summary

This article is a supplement to the official Open edX documentation, edX Course Catalog API User Guide, that really only covers a small subset of the complete set of REST api’s that Open edX implements. Additionally, the official documentation is written for the specific use case of someone gaining access to the course catalogue of the edx.org web site. For the avoidance of doubt, there are thousands of Open edX platform installations that are not actually affiliated with edx.org in any way but that run on the exact same open-source code base managed here.

This how-to article is aimed at helping developers and technical professionals to enable and expose the REST api’s on these independent Open edX installations. More specifically, it is intended to get the built-in api’s working on your Open edX installations under two distinct schemes:

  1. authenticated as an Open edX Django superuser, so that you can view REST api output from any url end point using an http client such as Postman, curl, or a web browser.
  2. a production environment of any third party platform in which you intend to use a secret key to manage access to the Open edX api’s.

These instructions are written to work with both the native installation method as well as the Tutor 1-Click installation method.

The Open edX platform implements nearly 300 different URL end points covering most aspects of its core functionality.

  • Course Bookmarks api

  • Course Completion Certificates api

  • Cohorts api

  • E-Commerce api

  • Course Completion api

  • Courseware Management api

  • Course Credit api

  • Course enrollments, bulk enrollments, and program enrollments

  • Course Grades API

  • Demographic api

  • Course Enrollment Discounts api

  • Discussion Forum api

  • Proctoring api

  • Entitlements api

  • Platform Experiments (A/B split tests) api

  • LTI Consumer api

  • Mobile app api

  • Organizations Management api

  • User Management and Profile Image api

  • Team & Staff api

  • Third Party Authentication api

  • XBlock api

These are the production api's that are also used on edx.org to support the stringent systems integration requirements of its hundreds of large affiliate educational institutions. These api's are also leveraged extensively within the Open edX code base for inter-application communications and operations across its many components. If you are planning to integrate a third party system with Open edX for any reason, or if you are building a web app that will interact with the Open edX platform, then these are the api's that you should use.

All of these API’s are secure, well-written, well-documented, and performant. The api’s are independently version stamped, which will protect you from risks of future deprecations. The underlying source code for these api’s is objectively well-tested as measured by Coverage.

The api’s are implemented using a consistent strategy that is widely accepted as best practice in the Python/Django developer community. The code to create each API relies on the Django REST Framework, an excellent independent open source project. The API’s are documented with Swagger for URL discovery as well as with Sphinx for the source code itself. You can leverage the Swagger documentation to learn what URL end points are published and what parameters are required for each.

Also if you’re interested, with modest effort you can explore the Open edX code base to learn more about the technical specifics of how the output for various URL end points is generated. There’s a coding exercise at the bottom of this article that you can use to get started.

I. Discover what api’s are available

From the LMS of your own Open edX installation you can view a nicely organized, complete list of api url’s at the end point /api-docs/. So for example, here is the api-docs page for https://mooc.academiacentral.org/api-docs/. This page is published using Swagger. It is an interactive page that not only provides extensive information about each api url end point but also provides you with a way to authenticate and then to interact with the end points.

Swagger introspects Python source code and the Django REST Framework to generate a well-organized and intuitive web page representing all of the REST api’s in the code base. Later in this article we’ll review more of the features that are built-in to this page.

II. LMS application configuration settings

You can skip this step if you’re running a Tutor build of Open edX. For native builds you need to verify the following LMS application configuration settings, which are located in /edx/etc/lms.yml.

# You should edit the following default settings
#---------------------------------------------------
COURSE_CATALOG_API_URL: http://localhost:8008/api/v1
COURSE_CATALOG_URL_ROOT: http://localhost:8008
COURSE_CATALOG_VISIBILITY_PERMISSION: see_exists

# Here are the same settings, modified to work for mooc.academiacentral.org
#---------------------------------------------------
COURSE_CATALOG_API_URL: http://mooc.academiacentral.org/api/v1
COURSE_CATALOG_URL_ROOT: http://mooc.academiacentral.org
COURSE_CATALOG_VISIBILITY_PERMISSION: see_exists

Note that the default setting for COURSE_CATALOG_VISIBILITY_PERMISSION is fine in most cases.

III. Django admin console configuration

By default, all of the api's are available using Basic Authentication with a Django user that has superuser status. Thus, if you log in to your Open edX LMS as a Django superuser then you should be able to connect to and interact with all of the URL end points that you see on the /api-docs/ page. The instructions that follow are only necessary if, in addition to this basic level of api access, you also want to provide more controlled access to some of these end points using an api key. In this case you'll need to do some minor additional configuration of your Open edX installation from the Django admin console located at the end point /admin.
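Under the hood, Basic Authentication just means sending a base64-encoded username:password pair in the Authorization header of every request. Here is a minimal Python sketch of what that header looks like; the credentials shown are placeholders, not real accounts.

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Return the HTTP header that Basic Authentication adds to a request."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# Hypothetical superuser credentials -- substitute your own.
headers = basic_auth_header("edx_superuser", "change-me")
```

Postman, curl, and the Swagger page all construct this same header for you; the sketch is only meant to demystify what "Basic Authentication" means on the wire.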


IV. API credentials work flow

The Django admin console setting from step III above will expose the url end point /api-admin/. This url end point behaves like a single-page application; thus, you'll see different page content depending on the approval status of your api credentials request. Initially, you'll see an API Access Request form like the following.

Completing the access request form and clicking the “Request API Access” button will advance you to a different page that looks like the following.

You’ll now need to return to the Django admin console to manually approve the api access request that you just created in the prior step.

If you’ve enabled the SMTP email service on your Open edX installation then approving the api access request will trigger an email to be sent to the email address assigned to your Django user account.

You might notice that the text body of the email above includes references to the url https://example.com/. This url appears rather than https://mooc.academiacentral.org because in reality the Open edX api’s are not configured on this platform to allow public access. That is, step II above has not actually been implemented on this Open edX installation.

Approving the api access request will also change the state of the url end point /api-admin/ to the following, where you’re now able to actually generate a client key and secret that you can use in http requests to the api end points.

V. Test connectivity from the Swagger page

With all of the administrative access formalities out of the way, we can now focus our complete attention on what these api’s actually do. We’ll start by returning to your Open edX’s Swagger page located at /api-docs/. From this page we’ll authenticate and then test an api end point that returns a json list of your courses.

Note that this page only uses Basic Authentication, regardless of whether you followed the steps to create an api key/secret pair. You should provide the username and password of a Django user with superuser access.

The two screen shots below indicate that I am successfully authenticated for purposes of interacting with any of the api end points available from the Swagger page. A word of caution: make extra certain that you are authenticated before delving further into the api end points themselves. The Swagger UI for each end point will not provide much in the way of useful feedback if you attempt to access any of the url end points without a valid authentication token in your http requests.

VI. Interact with an api end point using Postman

Postman is a popular desktop application that is the gold standard for working with REST api’s. I highly recommend it because of its great UI and its ease of use. The key difference between using Postman versus any other alternative is that Postman provides you with an easy way to see all elements of http requests and responses.

Following are, respectively, the fully-formatted (prettified) http response body, the http header values, and the cookies for an http request to the course list api end point: https://mooc.academiacentral.org/api/courses/v1/courses/
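The course list end point is paginated. Assuming the DRF-style response shape that Swagger documents for this end point, a results list plus a pagination object carrying the next-page URL (verify this against your own installation), a small helper can walk every page. The fetch_json callable is a stand-in for whatever HTTP client you prefer.

```python
def iter_courses(fetch_json, first_url):
    """Yield every course across a paginated course-list response.

    fetch_json(url) -> dict is any callable that GETs the url and returns the
    parsed JSON body. The {"results": [...], "pagination": {"next": ...}}
    shape is an assumption based on this end point's Swagger documentation.
    """
    url = first_url
    while url:
        page = fetch_json(url)
        yield from page.get("results", [])
        url = page.get("pagination", {}).get("next")
```

Because the fetcher is injected, the same helper works with requests, urllib, or a cached test fixture.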

VII. Interact with an api end point using the Swagger page

The Swagger page is a one-size-fits-all solution for discovery and crude interaction with api end points. The real strength of the Swagger page is that it provides very detailed meta information about api end points, including the data definitions of each attribute contained in the json response object. This is an invaluable resource during early stages of your integration design.

While it’s technically possible to do everything from Swagger, including testing your http requests, I generally only use it as a documentation resource because the user experience in Postman is so much better for testing.

An example of the limitations of Swagger for testing purposes: the site that I’m using for the screen shots for this article, mooc.academiacentral.org, contains a 301 redirect from http to https which in point of fact is a common web hosting practice. Nonetheless, the 301 http response prevents Swagger from returning correct results for even the simplest url end points. Worse still, it doesn’t provide any meaningful error response that would help you to understand why you didn’t get results. Postman, and even a simple web browser for that matter, deal with this same situation as you’d expect.

VIII. Interact with an api end point on the Ubuntu command line

In some rare and quite isolated circumstances I will (very occasionally) use curl, a command-line tool whose name stands for “client URL”. The only case where this ever makes practical sense for me is a url end point that does not require any authentication, where I intend to pipe the http response into something else on the command line. Other than that, I avoid using curl.

IX. Retrieve an api end point using a Browser

Last but not least, you can always use your web browser if you only need to test a “get” request. I’m using Chrome with a json-prettify extension for this screen shot. I prefer a browser solution like this when I intend to save the results to a json file.

X. Hacking Open edX api’s

The api's in Open edX are implemented with the Django REST Framework (DRF). The edX engineers have an excellent understanding of this framework, and the edx-platform repo provides many fine textbook examples of the right way to maximize the many benefits of this technology. With a rudimentary understanding of Django and DRF, you can find and explore the source code behind these api's.

As a working example, let’s search the code base to locate the source that generates the json output of the api url end point https://mooc.academiacentral.org/api/courses/v1/courses/ that we’ve referenced throughout this article. You can follow these steps by cloning the edx-platform repository from Github and opening it with a code editor like VS Code.

Since our only known entry point is a url, to locate the underlying code block I'm going to parse and spelunk the slugs of the url, /api/courses/v1/courses/. Based on generally-accepted practices for Django projects I know to begin my search in /edx-platform/lms/urls.py, whereupon I am able to find the following regex definition for the first two slugs.

The include() reference include('lms.djangoapps.course_api.urls') in the url() definition tells me that the next slug or slugs are defined in edx-platform/lms/djangoapps/course_api/urls.py, whereupon I am able to find the remaining two slugs, v1/courses.

From this url() definition I can see that it references a Django view class named CourseListView which, via inspection of the import statement from .views import CourseDetailView, CourseIdListView, CourseListView at the top of the Python module, I can see comes from the views module of the same Django app.

From inspection of the class definition for CourseListView in this module I’m able to verify that this class is in fact derived from the DRF base class ListAPIView. As is commonly the case with the edx-platform repo, the source code of this class contains copious documentation in the docstring explaining its common use case, an example request, the structure of the request and response. And finally, within this class definition I find the def get_queryset() definition containing the underlying source code for the api end point.

For your convenience I’ve pasted the complete Python class definition here.

@view_auth_classes(is_authenticated=False)
class CourseListView(DeveloperErrorViewMixin, ListAPIView):
    """
    **Use Cases**

        Request information on all courses visible to the specified user.

    **Example Requests**

        GET /api/courses/v1/courses/

    **Response Values**

        Body comprises a list of objects as returned by `CourseDetailView`.

    **Parameters**

        search_term (optional):
            Search term to filter courses (used by ElasticSearch).

        username (optional):
            The username of the specified user whose visible courses we
            want to see. The username is not required only if the API is
            requested by an Anonymous user.

        org (optional):
            If specified, visible `CourseOverview` objects are filtered
            such that only those belonging to the organization with the
            provided org code (e.g., "HarvardX") are returned.
            Case-insensitive.

    **Returns**

        * 200 on success, with a list of course discovery objects as returned
          by `CourseDetailView`.
        * 400 if an invalid parameter was sent or the username was not provided
          for an authenticated request.
        * 403 if a user who does not have permission to masquerade as
          another user specifies a username other than their own.
        * 404 if the specified user does not exist, or the requesting user does
          not have permission to view their courses.

        Example response:

            [
              {
                "blocks_url": "/api/courses/v1/blocks/?course_id=edX%2Fexample%2F2012_Fall",
                "media": {
                  "course_image": {
                    "uri": "/c4x/edX/example/asset/just_a_test.jpg",
                    "name": "Course Image"
                  }
                },
                "description": "An example course.",
                "end": "2015-09-19T18:00:00Z",
                "enrollment_end": "2015-07-15T00:00:00Z",
                "enrollment_start": "2015-06-15T00:00:00Z",
                "course_id": "edX/example/2012_Fall",
                "name": "Example Course",
                "number": "example",
                "org": "edX",
                "start": "2015-07-17T12:00:00Z",
                "start_display": "July 17, 2015",
                "start_type": "timestamp"
              }
            ]
    """
    class CourseListPageNumberPagination(LazyPageNumberPagination):
        max_page_size = 100

    pagination_class = CourseListPageNumberPagination
    serializer_class = CourseSerializer
    throttle_classes = (CourseListUserThrottle,)

    def get_queryset(self):
        """
        Yield courses visible to the user.
        """
        form = CourseListGetForm(self.request.query_params, initial={'requesting_user': self.request.user})
        if not form.is_valid():
            raise ValidationError(form.errors)

        return list_courses(
            self.request,
            form.cleaned_data['username'],
            org=form.cleaned_data['org'],
            filter_=form.cleaned_data['filter_'],
            search_term=form.cleaned_data['search_term']
        )

I hope you found this helpful. Contributors are welcome. My contact information is on my web site. Please help me improve this article by leaving a comment below. Thank you!

The post Getting Started With Open edX API’s appeared first on Blog.

Create a REST API With WordPress
https://blog.lawrencemcdaniel.com/create-a-rest-api-with-wordpress/ Mon, 30 Aug 2021 20:59:04 +0000
WordPress is a fast and highly effective platform for hosting a REST API for a variety of use cases, especially if your API endpoints serve content like images or filterable, categorized blocks of text. This step-by-step guide demonstrates how to implement a production-ready REST API in only a few hours, and with no custom coding required.

Summary

“Why build a REST API with WordPress?” Great question! Even though my own vanity and technology-stack preferences tend towards, well, any other option besides WordPress, I nonetheless recognize that it is a pragmatic and robust solution for many use cases. A WordPress REST API implementation is best explained by way of example, and as it happens, I implemented a REST API for my ReactJS-based personal web site in mid-summer of 2020 and I'm quite satisfied with it in terms of usability, customizability, performance, reliability, cost, and maintainability. Incidentally, I've since come to realize that a WordPress-based REST API is also horizontally scalable, even more so than a typical WordPress web site.

The WordPress build itself is surprisingly simple. No custom PHP programming is required. You can refer to this blog article, “Step by Step Guide to Setup WordPress on Amazon EC2 (AWS) Linux Instance“, for step-by-step instructions on creating a basic WordPress site on Ubuntu Linux. Since version 5.0, WordPress includes a full-featured REST API with a complete set of endpoints for managing WordPress posts, pages and media. However, you can greatly enhance these built-in capabilities with a pair of free WordPress plugins: Advanced Custom Fields (or Advanced Custom Fields PRO if you use an Avada theme) and ACF to REST API. This, in a nutshell, is the basic installation needed to implement a full-featured REST API with basic entity relationships, but in the interest of being thorough, following is the complete set of WordPress plugins that I am using on the site.

You can inspect the following endpoints from my production REST API to get an idea of the breadth and sophistication that is achievable from a REST API implemented with nothing more than WordPress and these two plugins:
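Core WordPress REST routes follow a predictable pattern under /wp-json/wp/v2/. The sketch below shows how such end point URLs are composed; the hostname and query parameters are placeholders for illustration, not my production end points.

```python
from urllib.parse import urlencode

def wp_rest_url(site: str, resource: str, **params) -> str:
    """Compose a core WordPress REST API URL such as /wp-json/wp/v2/posts.

    Core resources include posts, pages, media, categories, and tags; query
    parameters like categories and per_page filter and page the results.
    """
    query = f"?{urlencode(params)}" if params else ""
    return f"{site.rstrip('/')}/wp-json/wp/v2/{resource}{query}"

# e.g. the twenty most recent posts in (hypothetical) category 7:
url = wp_rest_url("https://api.example.com", "posts", categories=7, per_page=20)
```

With the ACF to REST API plugin installed, the custom fields attached to each post simply appear as an extra acf object in these same responses, so no additional routing is needed.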

Why I Built My REST API With WordPress

Prior to deciding on WordPress I evaluated several alternatives, including Django REST Framework, NodeJS, AWS API Gateway, and Ruby on Rails. Each of these alternatives is a fine technology, and in point of fact I spend considerable time working with some of them. Importantly, WordPress offers excellent infrastructure for basic text and media content management simply by virtue of its maturity, expansive ecosystem, and rich catalogue of mostly-free plugins. As of Aug-2021, around 455 million web sites on the Internet were running on WordPress. This was a major consideration in my case due to the extensive use of still images on my personal web site as well as the simple structure of the site's text content.

Conversely, the Achilles' heel of most of these alternatives is their comparative shortcomings in basic content management. Secondarily, I think all of these alternatives are comparatively more complicated to use for modeling a basic API end point. Thus, it would have taken me more time and effort to achieve, at best, the same end result with any of these competing alternatives. Lastly, and speaking from first-hand experience, managing any of these other technologies in a production environment, where you need to consider data backups, scaling, security and code updates, is more complicated and usually more maintenance intensive.

A more thorough, prioritized explanation of my selection criteria follows.

1. Superior Image content management Tools

I wanted a REST API solution that seamlessly integrates with a professional-quality image content management system. Simple and effective image management is extremely important to me. My personal web site contains more than 500 images, and all of these need to be individually compressed, optimized and cropped based on how they are presented on each page, which, frankly, is pretty arbitrary. Site visitors' attention instinctively flows towards the site's images, so these need to be quickly and efficiently delivered to the browser and well-presented, which is more technically difficult and tedious to achieve than it might seem. Furthermore, image optimization itself is notoriously difficult to do well, and I would lose dozens of hours a month on this one mundane task if not for the professional quality image tools that are available in the WordPress ecosystem.

I quickly upload raw original images which are then automatically compressed, cropped, optimized and then transferred to a CDN hosted by AWS, all via a seamless automated work flow pipeline. Each of these tasks individually is hard to do well, but with WordPress and a $100 investment I had all of this working in less than three hours. The end results are quite satisfying: all of the images that appear on my personal web site are perfectly sized and optimized, with zero effort on my part beyond simply uploading the original images.

All of this is accomplished with two premium plugins: Imagify to compress and optimize the images, and WP Offload Media to relocate optimized images to AWS Cloudfront CDN.

Imagify reduced the aggregate original image content size by nearly 70%, a reduction of more than 100 MB in my case.

One of the driving reasons that I purchased WP Offload Media is that, in addition to automatically syncing the contents of my image media library to AWS CloudFront, it also automatically rewrites URL's in api responses.

Note that Imagify creates several versions of each image, and WP Offload Media re-writes the URLs for each of these. This combination of functionality allows you the luxury of deferring the decision of which image to present in the browser until it's actually needed, which in turn enables you to select the smallest/lightest image in real time based on the user's window size and their browser's cache contents at that exact moment.
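That real-time selection amounts to picking the smallest variant that still covers the rendered slot. In the browser this logic runs in JavaScript against the list of variant URLs the API returns; the language-agnostic sketch below uses Python, and the variant widths and filenames are made-up examples.

```python
def pick_variant(variants, slot_width):
    """Choose an image URL from (width_px, url) pairs.

    Returns the smallest image that is still at least slot_width pixels wide,
    falling back to the widest available image when nothing is big enough.
    """
    fits = [v for v in sorted(variants) if v[0] >= slot_width]
    return (fits[0] if fits else max(variants))[1]

# Hypothetical variants of one optimized upload:
thumbs = [(320, "img-320.jpg"), (768, "img-768.jpg"), (1536, "img-1536.jpg")]
```

Because the CDN already hosts every variant, this choice costs nothing server-side; the only decision left to the client is which URL to request.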

2. Ease of implementation

There are a half dozen api end points for my personal web site, and each of them is pretty simple. I've never found anything as easy to implement as WordPress for simple backend jobs like this, even though I work extensively with Django REST Framework in my day job. I built the REST API for my personal web site and went to production in a single work day. That also included setting up DNS, site security, https certificates, and offsite data backup.

3. Ease of Use

I only have two use cases for managing site content:

  • Uploading new images such as client logos or technology logos. In only a couple of minutes I’m able to upload and categorize an original image to the REST API site, and then it automatically appears in my ReactJS web site.

  • Adding new text entries such as a LinkedIn recommendation or a portfolio case study. Also in only a couple of minutes I can create and categorize a new WordPress post containing the text and attribute data of new content for my ReactJS site, and then it automatically appears.

This is important to me because, after having invested several weeks developing the ReactJS/Redux site, going forward I want to minimize the time required to keep the site updated with new portfolio entries, technical skills, client logos, LinkedIn recommendations and so on. Additionally, and unlike the site's front end, with the REST API I don't spend any time or energy updating or maintaining source code.

4. Maintainability

By the same measure, I also want to minimize my time requirements for keeping this site up, running, secure and healthy. This is easily achieved with a 3rd party backup solution (Updraft) along with security tools (Wordfence). Additionally, I minimize the number of WordPress plugins installed on the site, in the interest of keeping the environment as simple as possible. Maintenance of this site takes me around 10 minutes a month.

5. Performance

Performance? WordPress? Really???? Well, as a matter of fact yes. It’s VERY difficult to beat the observed performance realized via image optimization, CDN delivery, and api page caching that you easily achieve with this simple combination of WordPress plugins. Additionally in my case, the cache hit ratio for this site is effectively 100% since the content of this api is mostly static. API requests are therefore mostly served in-memory by the site’s caching engine (WP Rocket), achieving response latencies of between 50ms and 100ms per request. But don’t simply take my word for it, you can test any of the end points on this site with a page performance service like Solar Winds’ Pingdom.

6. Operating Cost

I’m sensitive to running costs since this REST API supports my personal web site which obviously is not a revenue-generating venture. This site consumes negligible resources and runs effectively for free, as it is colocated among several other sites running on the same shared infrastructure.

I hope you found this helpful. Please help me improve this article by leaving a comment below. Thank you!

The post Create a REST API With WordPress appeared first on Blog.
