rst2pdf is a Python 2 application that we're making compatible with Python 3. When developing Python applications, I've found it useful to be able to switch python versions easily and also set up clean environments to work in. To do this, I currently use pyenv.

This is how I set it up:

Install Pyenv

On my Mac, I install pyenv & its sister project pyenv-virtualenv with Homebrew:

$ brew install readline xz
$ brew install pyenv pyenv-virtualenv

You then need to add this to .bashrc:

eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

On Ubuntu, use the pyenv-installer:

$ sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \
  libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \
  xz-utils tk-dev libffi-dev liblzma-dev python-openssl git
$ curl https://pyenv.run | bash

You then need to add this to .bashrc:

export PATH="$HOME/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

After restarting the terminal, pyenv will be available to you.

Install Python using Pyenv

Pyenv's main job is to install different Python versions into their own environments and allow you to swap between them. You can even set it up to try multiple versions in order when you run a Python application, which can be quite useful.

To list the available versions, run pyenv install -l. I install the latest versions of the Pythons that I'm interested in:

$ pyenv install 2.7.16
$ pyenv install 3.7.4

Use pyenv versions to see what's installed.
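The output looks something like this (exactly what you see depends on what you've installed and selected):

$ pyenv versions
* system (set by /Users/rob/.pyenv/version)
  2.7.16
  3.7.4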

We can now set a given version as our system-wide Python with pyenv global; however, it's much more useful to set up isolated environments and use them.

Create day-to-day environments

Separate environments, known as virtualenvs or venvs, isolate an app and its dependencies from other apps. In principle, you could have a separate environment for each application, but in practice I've found that for my day-to-day apps I can use the same environment for all apps on a given major Python version. I call these environments apps2 and apps3 and put all my day-to-day apps and their dependencies in there, leaving the original Python installations clean for creating further environments for development work.

We create a new environment using the pyenv virtualenv command:

$ pyenv virtualenv 2.7.16 apps2
$ pyenv virtualenv 3.7.4 apps3

We set these as my system-wide defaults using pyenv global:

$ pyenv global apps3 apps2

This tells pyenv to look for a given app in the apps3 environment first and if it's not there, look in apps2. We can now install python apps as required.

Firstly, the released version of rst2pdf:

$ pip2 install -U rst2pdf

Then, other interesting python scripts, such as the AWS CLI:

$ pip install -U awscli
$ pip install -U aws-sam-cli

(With this set-up, pip is a synonym for pip3 and is one less character to type!)
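If you're ever unsure which environment a given command resolves from, pyenv which will tell you; with my set-up I'd expect something like this (your paths will differ):

$ pyenv which aws
/Users/rob/.pyenv/versions/apps3/bin/aws
$ pyenv which rst2pdf
/Users/rob/.pyenv/versions/apps2/bin/rst2pdf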

Create development environments and activate locally

When I'm developing rst2pdf, I want separate environments so that I can change the dependencies without breaking my day-to-day version of rst2pdf:

$ pyenv virtualenv 2.7.16 rst2pdf-py2
$ pyenv virtualenv 3.7.4 rst2pdf-py3

To use one of these environments when I'm developing rst2pdf, I run pyenv local within the rst2pdf source directory so that the environment is selected automatically:

$ cd ~/dev/rst2pdf
$ pyenv local rst2pdf-py3
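
Behind the scenes, pyenv local writes a .python-version file into the directory, and pyenv reads it whenever you're in that directory or below it:

$ cat ~/dev/rst2pdf/.python-version
rst2pdf-py3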

Now that I'm using a new, clean environment, I can set it up for development:

$ pip install nose coverage
$ pip install -r requirements.txt
$ pip install -e .

I repeat this for rst2pdf-py2 and it's now easy to develop rst2pdf in both Python 3 and 2 without impacting my ability to create presentations and documents from rST using the released version of rst2pdf.


I really like the 1Password password manager and recently switched to using the subscription-based account. This allows access to my passwords via the web so, as you can imagine, I have a very strong 35-character master password set.

On my Mac, I use the 1Password app and that requires me to enter my master password reasonably frequently, so this very long password is not so desirable here. I'm comfortable that a 12-character password is enough locally as you have to log in to the Mac first.

To persuade 1Password for Mac to allow you to use a different master password to the 1password.com master password, you need a standalone vault that is created before you add your 1password.com account. Getting this right took a little bit of experimentation and reading around, so I've written down the steps for when I next need to set it up.

The steps
  1. Log into your 1password.com account on the web and download the Mac app.
  2. Install the Mac app and start it.
  3. Choose to create a standalone vault. This is hidden under a more options section.
  4. Enter a master password for the standalone vault. This will be your shorter Mac app-only password.
  5. Add your 1password.com account to the Mac app. This requires your long 1password.com master password.
  6. Ensure that you only use the 1password.com vaults. In Preferences -> Vaults:
    • Ensure that the Vault for Saving is set to your vault in the 1password.com account
    • Uncheck your "Primary" standalone vault

That's it. We can now log into the 1Password Mac app using a shorter password than our very secure master 1password.com password.


Azure App Service is a way to host your web application in a container without having to think about the server. It's the same PaaS concept as AWS Elastic Beanstalk and supports all the main web programming languages. It also supports Windows and Linux OS containers.

I have a client that is moving an on-premises PHP application to App Service, so I have been looking at what I need to do to deploy it there.

To get started, Azure have a couple of great tutorials for Linux and Windows with which I got a basic app running in no time, so I could turn my attention to the specifics I cared about.

Database drivers

My app uses SQL Server, so I was pleased to see that both the Windows and Linux PHP containers have the sqlsrv extension installed. However, there's a bug today where the Linux container is missing the ODBC driver, so SQL Server support doesn't actually work on the Linux container. This should be fixed in July apparently.

Both containers also support PostgreSQL and MySQL, so the databases I care about are covered.

Rewriting all URLs to index.php in the public directory

In common with many PHP applications, I have a public directory where the files to be served by the web server are kept and my PHP and other source files are elsewhere. I also use rewriting to map all URLs to my public/index.php file.

The way we handle this depends on the container.

Windows container

On the Windows container, we need to change the path mapping so that the public directory is mapped to /. This can be done in the web portal in Configuration -> Path mappings, where by default the / virtual path is mapped to the site\wwwroot physical path. Edit this so that it points to site\wwwroot\public. I'm not a fan of web-based tooling for things like this as it's error-prone. Azure provides a command line tool, so we can make the same path mapping change with:

$ az resource update --name web --resource-group {Resource Group} \
  --namespace Microsoft.Web --resource-type config --parent sites/{App Service} \
  --api-version 2015-06-01 \
  --set properties.virtualApplications[0].physicalPath="site\wwwroot\public"

(Change {Resource Group} and {App Service} to the correct names for your installation.)
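
If you want to confirm that the mapping has been updated, you can read the same config resource back (a quick check, using the same placeholders as above):

$ az resource show --name web --resource-group {Resource Group} \
  --namespace Microsoft.Web --resource-type config --parent sites/{App Service} \
  --api-version 2015-06-01 --query properties.virtualApplications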

Now that we're serving from /public, we can add the rewrite rule. With IIS, we use a web.config file:

/public/web.config:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Pretty URLs to index.php" stopProcessing="true">
          <match url="^(.*)$" />
          <conditions logicalGrouping="MatchAll">
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="index.php" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

Linux container

On Linux, the container runs Apache, so we can rewrite the URL to public/index.php with a single .htaccess file in our root directory:

/.htaccess:

RewriteEngine On
RewriteBase /

# Rewrite static files that live in public/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)\.(woff|ttf|svg|js|ico|gif|jpg|png|css|htc|xml|txt|pdf)$ /public/$1.$2 [L,NC]

# Redirect all other URLs to public/index.php
RewriteRule ^((?!public/).*)$ public/index.php/$1 [L,QSA]

Two rewrite rules are required: first we rewrite static files so that they continue to be served directly by Apache, and then we rewrite everything else to public/index.php.

Environment variables

Environment variables are a common way to set per-installation configuration and this is done via the portal or the command line:

$ az webapp config appsettings set --name {App Service} \
    --resource-group {Resource Group} \
    --settings \
	ENV_VAR_1="some value" \
	ENV_VAR_2="some other value"
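
To double-check what's set, the matching list command displays the current app settings (note that it shows the values, so be careful where you run it):

$ az webapp config appsettings list --name {App Service} \
    --resource-group {Resource Group}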

Summary

Running a PHP application on Azure App Service with either the Windows or Linux container is much the same as on any other hosting and seems to work well.


I've been trying to solve an issue with the rst2pdf Sphinx extension which allows you to use rst2pdf to create a PDF of your Sphinx documentation rather than using Sphinx's default pdfTeX builder.

In order to understand what was happening, I really wanted to inspect variables at certain stages of the process, which is easiest with a step debugger. As I already use PhpStorm for step debugging PHP, I fired up PyCharm and worked out how to do it there.

Set up a Run/Debug configuration

PyCharm's debugger works with Run/Debug configurations so that you can set the script to run, parameters, version of Python you want to use, etc. This is accessed via the Run -> Edit Configurations… menu item. It's also available as a control on the right of the tool bar next to the "play" button.

To create a new configuration, we press the + button in the tool bar of the Run/Debug Configurations window. This gives us a list of possible templates and so I picked Python.

Each configuration needs a name, so I called mine sphinx-issue168 as that's the test that shows the problem. We can then turn to the key configuration parameters:

  • Script path – This is the Python script that PyCharm will run when you start debugging. As I'm debugging a Sphinx extension, I need the full path to sphinx-build, which is in the bin directory of my virtualenv, so for me this is /Users/rob/.pyenv/versions/3.7.3/envs/rst2pdf-3/bin/sphinx-build.
  • Parameters – The command line parameters to pass to the script. For sphinx-build, I looked up what was passed in the Makefile and used that: -E -b pdf -d _build/doctrees . _build/pdf. The key parameters are -E to force a rebuild even if nothing's changed and -b pdf so that it uses the rst2pdf builder.
  • Working directory – This is the directory that the script is run from. Although sphinx-build is on the path, the input files and output directory are relative, so I set this to the directory that the Makefile is in: /Users/rob/Projects/python/rst2pdf/rst2pdf/tests/input/sphinx-issue168 – again, a fully qualified directory as I'm not sure where PyCharm starts from.
  • Emulate terminal in output console – Checking this means that we get formatting and colours.
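
Putting those settings together, PyCharm is effectively running the same command as the Makefile does; a rough equivalent from the shell would be:

$ cd /Users/rob/Projects/python/rst2pdf/rst2pdf/tests/input/sphinx-issue168
$ /Users/rob/.pyenv/versions/3.7.3/envs/rst2pdf-3/bin/sphinx-build \
    -E -b pdf -d _build/doctrees . _build/pdf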

Debugging

To debug, we first set a breakpoint. On Mac, the keyboard shortcut is cmd+F8.

We can then start the debugger with Ctrl+D (the Run -> Debug 'configuration' menu item, or press the green "bug" button in the top-right-hand toolbar). PyCharm will open a debug console and you can see your script starting. When the breakpoint is reached, the debug pane changes to the Variables view where you can see all the variables currently in scope.

Useful operations from here are:

  • Step Over (F8) – Executes the current line and moves to the next one. If the current line is a function call, it executes the function call behind the scenes.
  • Step Into (F7) – Executes the current line, but if this is a function call, the debugger enters the called function, executes the first line of that function and stops there.
  • Step Out (Shift+F8) – Runs until the current function has finished and stops on the next line after where the current function was called.
  • Resume Program (Option+Cmd+R) – Runs the script until the next breakpoint or until it completes.

Once I had this working, I was able to determine how the extension interoperated with Sphinx and work out what had gone wrong. In this particular case, it turned out to be the result of a change in Sphinx that was fixed in a later version than the one I was using.


Part two of my article on serverless PHP using Bref has been published! In part one, I introduced Bref as we wrote a simple "Hello World" application.

Part two follows this up by exploring a more complete serverless application, my Project365 website. This S3-hosted static website is built using a serverless PHP function that connects to the Flickr API to retrieve my one-photo-per-day images and present them on a single page per year. In the article I show how to use Bref to connect to a third-party API and use the AWS PHP SDK to update S3 and invalidate CloudFront caches.

The article is in the June 2019 issue of php[architect]. If you don't have a subscription, now may be a good time to take one out!


I regularly print to PDF on my Mac. This is done from the print dialog by selecting Save as PDF from the drop-down in the bottom left of the dialog, which is a bit of a pain to get to using the mouse.

I recently discovered that I could create a keyboard shortcut to make this much easier.

In System Preferences -> Keyboard -> Shortcuts -> App Shortcuts you can create a keyboard shortcut to any menu item in any Mac application. This is one of those underrated clever features of macOS in my opinion and I love it.

What I didn't realise is that the PDF drop-down list on the print dialog is considered a menu too and so can be targeted with a shortcut key.

To add a shortcut, simply select App Shortcuts and click on the [+] button. Set the Menu Title to "Save as PDF" and set the keyboard shortcut to ⌘S. We can use ⌘S because the Save menu item in the File menu is disabled when the print dialog is active, so it doesn't clash.

To use it, I now press ⌘P followed by ⌘S and I'm in the save dialog and can save my PDF.


I have a handy bash script that transcodes videos using Don Melton's video_transcoding tools. This script was written in a hurry and one limitation was that it re-transcoded every source file even if the output file already existed.

The script looked like this:

#!/usr/bin/env bash

readonly source_dir="${1:-MKV}"
readonly output_dir="${2:-MP4}"

for file in "$source_dir"/*.mkv; do
    ./transcode.sh "$file" "$output_dir"
done

What it should do is only run transcode.sh if the output file doesn't exist, so I updated it.

Parameter expansion to the rescue!

I needed to extract the base name and then construct the target filename in order to test for its existence. To extract the base name of the file, I used parameter expansion:

filename=${file##*/}
basename=${filename%.*}

The first line removes the directory portion of file. This works by using the ##[word] expansion, which removes the largest matching prefix pattern; for the pattern */ it removes everything up to and including the last / in the string – i.e. any directory path.

The second line removes the extension from the filename by using the %[word] expansion. This removes the smallest matching suffix pattern: for .*, it removes everything from the last . to the end of the string, and so removes the extension.

I can then create the target by concatenating the output_dir, basename and the .mp4 extension and then test for the target file's existence.
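
As a quick worked example with a made-up filename, the expansions and the concatenation behave like this:

$ output_dir="MP4"
$ file="MKV/Some Film (2019).mkv"
$ filename=${file##*/}       # Some Film (2019).mkv
$ basename=${filename%.*}    # Some Film (2019)
$ echo "$output_dir/$basename.mp4"
MP4/Some Film (2019).mp4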

The final script is now:

#!/usr/bin/env bash

readonly source_dir="${1:-MKV}"
readonly output_dir="${2:-MP4}"

for file in "$source_dir"/*.mkv; do
    filename=${file##*/}
    basename=${filename%.*}
    target="$output_dir/$basename.mp4"
    if [ ! -f "$target" ]; then
        ./transcode.sh "$file" "$output_dir"
    fi
done

It's a small change, but it makes my life easier as I can run the script multiple times without needing to first delete the source files for anything I've already transcoded.


I'm trying to tighten up the policies of my AWS Lambda function so that it only has access to the one S3 bucket that it needs, so I added an S3CrudPolicy with the BucketName referencing the bucket that's defined in the template.

The relevant part of template.yaml looks like this:

Resources:
    ImagesBucket:
        Type: AWS::S3::Bucket
        Properties:
            BucketName: !Sub "${ProjectName}-${UniqueKey}-images"

    ResizeFunction:
        Type: AWS::Serverless::Function
        Properties:
            FunctionName: resize
            # ...
            Events:
                CreateThumbnailEvent:
                    Type: S3
                    Properties:
                        Bucket: !Ref ImagesBucket
                        Events: s3:ObjectCreated:*
            Policies:
                - S3CrudPolicy:
                    BucketName: !Ref ImagesBucket

However, this creates an error:

Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered
a terminal failure state Status: FAILED. Reason: Circular dependency between resources:
[ImagesBucket, ResizeFunctionRole, ResizeFunction, ResizeFunctionCreateThumbnailEventPermission]

(Who is this waiter anyway? This isn't a restaurant!)

Solution

To solve this, instead of referencing ImagesBucket in the BucketName of the S3CrudPolicy, we can put the bucket name in directly. This is not the ARN, just the name, so we can do:

            Policies:
                - S3CrudPolicy:
                    BucketName: !Sub "${ProjectName}-${UniqueKey}-images"

This works because setting BucketName to a plain string means there is no dependency on the ImagesBucket resource itself, so the circular reference is broken.


When creating an API with AWS Lambda and API Gateway, I discovered that a client request to a given resource with a verb that wasn't supported resulted in an unexpected response.

You can see this from this curl command to the /test resource which is only defined for GET:

$ curl -i -X PUT https://mh5rwr9q25.execute-api.eu-west-2.amazonaws.com/prod/test
HTTP/2 403
content-type: application/json
content-length: 42

{"message":"Missing Authentication Token"}

Given that I can GET the /prod/test resource, I would expect to see a 405 Method Not Allowed for a PUT, so the 403 Forbidden is a bit of a head-scratcher.

One way to handle this is to customise the Gateway Response. Gateway Responses are the set of responses that API Gateway returns when it can't process an incoming request. There are quite a few of them, and the one we want is MISSING_AUTHENTICATION_TOKEN.
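
If you want to see the full list of gateway responses for your API, one way (assuming you have the AWS CLI configured) is:

$ aws apigateway get-gateway-responses --rest-api-id mh5rwr9q25 \
    --region eu-west-2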

What we are going to do is create an AWS::Serverless::Api resource in our template.yaml which sets a different status code and response for the MISSING_AUTHENTICATION_TOKEN response. This is a new ability of SAM version 1.11.0, so make sure you have at least that version.

Our template looks like this:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: 'Test API'

Resources:
    MyApiName:
        Type: 'AWS::Serverless::Api'
        Properties:
            StageName: prod
            GatewayResponses:
                MISSING_AUTHENTICATION_TOKEN:
                    StatusCode: 405
                    ResponseTemplates:
                        "application/json": '{ "message": "Method Not Allowed" }'

    MyFunction:
        Type: AWS::Serverless::Function
        Properties:
            FunctionName: 'myapiname-test'
            Description: ''
            CodeUri: .
            Handler: index.php
            Timeout: 10
            MemorySize: 256
            Runtime: provided
            Layers:
                - 'arn:aws:lambda:eu-west-2:209497400698:layer:php-73:4'
            Events:
                HttpRoot:
                    Type: Api
                    Properties:
                        RestApiId: !Ref "MyApiName"
                        Path: /test
                        Method: GET

# This lets us retrieve the app's URL in the "Outputs" tab in CloudFormation
Outputs:
    MyApiName:
        Description: 'API URL in the Prod environment'
        Value: !Sub 'https://${MyApiName}.execute-api.${AWS::Region}.amazonaws.com/prod/'

Once we've deployed, a PUT request to our endpoint now returns the expected response:

$ curl -i -X PUT https://mh5rwr9q25.execute-api.eu-west-2.amazonaws.com/prod/test
HTTP/2 405
content-type: application/json
content-length: 35

{ "message": "Method Not Allowed" }

Warning

I should note that this solution isn't a panacea and introduces another problem. If you try to access an endpoint that doesn't exist, you also get a 405 back rather than the expected 404. Of course, before this change you got a 403, so it's wrong regardless. More work here is definitely needed.
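
For example, after this change I'd expect a request to a path that isn't defined at all to come back something like this:

$ curl -i -X GET https://mh5rwr9q25.execute-api.eu-west-2.amazonaws.com/prod/does-not-exist
HTTP/2 405
content-type: application/json
content-length: 35

{ "message": "Method Not Allowed" }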


In order to use Bref efficiently, I've developed a Makefile so that I don't have to remember all the various commands required. In particular, looking up the correct parameters to sam package & sam deploy is a pain and it's much easier to type make deploy and it all works as I expect.

It looks like this:

Makefile:

# vim: noexpandtab tabstop=4 filetype=make
.PHONY: list invoke invoke-local deploy outputs lastlog clean clean-all setup

REGION := eu-west-2
PROJECT_NAME := hello-world
UNIQUE_KEY := 1557903576

BUCKET_NAME := $(PROJECT_NAME)-$(UNIQUE_KEY)-brefapp
STACK_NAME := $(PROJECT_NAME)-$(UNIQUE_KEY)-brefapp

# default function to invoke. To override: make invoke FUNCTION=foo
FUNCTION ?= my-function

list:
	@$(MAKE) -pRrq -f $(lastword $(MAKEFILE_LIST)) : 2>/dev/null | awk -v RS= -F: '/^# File/,/^# Finished Make data base/ {if ($$1 !~ "^[#.]") {print $$1}}' | sort | egrep -v -e '^[^[:alnum:]]' -e '^$@$$'

invoke:
	vendor/bin/bref --region=$(REGION) invoke $(FUNCTION)

invoke-local:
	sam local invoke $(FUNCTION) --no-event

deploy:
	sam package \
		--region $(REGION) \
		--template-file template.yaml \
		--output-template-file .stack-template.yaml \
		--s3-bucket $(BUCKET_NAME)
	-sam deploy \
		--region $(REGION) \
		--template-file .stack-template.yaml \
		--stack-name $(STACK_NAME) \
		 --capabilities CAPABILITY_IAM
	vendor/bin/bref deployment --region $(REGION) $(STACK_NAME)

outputs:
	aws --region $(REGION) cloudformation describe-stacks --stack-name $(STACK_NAME) | jq '.Stacks[0]["Outputs"]'

lastlog:
	sam logs --region $(REGION) --name $(FUNCTION)

geterror:
	vendor/bin/bref deployment --region $(REGION) $(STACK_NAME)

clean:
	aws --region $(REGION) cloudformation delete-stack --stack-name $(STACK_NAME)

clean-all: clean
	aws --region $(REGION) s3 rb s3://$(BUCKET_NAME) --force

setup:
	aws --region $(REGION) s3 mb s3://$(BUCKET_NAME)

There are three variables that I need to set at the top:

  • REGION – The AWS region. This has to match the Bref layer used in template.yaml.
  • PROJECT_NAME – The name of the project. This is used as part of the S3 bucket and CloudFormation stack names.
  • UNIQUE_KEY – A random string to ensure uniqueness for bucket and stack names. I tend to use the current time, but any unique string will do (one way to generate it is shown below).
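
For example, a key like the one above can be generated from the current Unix timestamp (just a suggestion; anything unique works):

$ date +%s
1557903576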

I've included a full-cycle set of targets so make setup will create the initial S3 bucket that's required for the project and then make deploy is used to deploy my project.

If I want to start again, make clean will remove the CloudFormation stack and make clean-all will remove the stack and the bucket.
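
A typical full cycle with these targets looks like this:

$ make setup      # one-off: create the S3 bucket for deployment artefacts
$ make deploy     # package and deploy the CloudFormation stack
$ make clean-all  # tear everything down again, including the bucket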

I've also included a few utility targets:

  • make invoke FUNCTION=foo invokes the function foo on AWS.
  • make invoke-local FUNCTION=foo invokes the function foo on sam-local.
  • make outputs displays the outputs of the CloudFormation stack. This is useful for picking up the API Gateway URL for instance, if you set it up in your template.yaml.
  • make lastlog FUNCTION=foo displays the logs for the last invocation of the function foo.

Parameters for template.yaml

I pass the PROJECT_NAME and UNIQUE_KEY through to the template as the parameters ProjectName and UniqueKey respectively. These are then set in the Parameters section of the template:

template.yaml:

Parameters:
    ProjectName:
        Type: String
    UniqueKey:
        Type: String

I then use them in the template when I need uniqueness, such as when creating an S3 bucket:

template.yaml:

Resources:
    ImagesBucket:
        Type: AWS::S3::Bucket
        Properties:
            BucketName: !Join [ '-', [!Ref "ProjectName", !Ref "UniqueKey", "files" ] ]

Which creates a bucket named "hello-world-1557903576-files" which nicely complements "hello-world-1557903576-brefapp".
