I was so excited when it was first hinted in Denver that the next OpenStack PTG would be in Dublin. In my town! Zero jet lag! Commuting from home! Showing people around! Alas, it was not to be. Thanks, Beast from the East. Now everybody hates Ireland forever.

The weather definitely had some impact on sessions and productivity. People were jokingly, then worryingly, checking the news, dropping in and out of rooms as they tried to rebook their cancelled flights. Still, we did what we could, and had snow-related activities too - good for building team spirit, if nothing else!

I mostly dropped in and out of rooms; here are some of my notes and occasional highlights.

OpenStack Client

Like before, the first two days of the PTG were focused on cross-project concerns. The OpenStack Client didn't have a room this time, which seems fair as it was sparsely attended the last couple of times - I would have thought there'd be at least one helproom session, but if there was I missed it.

I regret missing the API Working Group morning sessions on API discovery and microversions, which I think were relevant. The afternoon API sessions were more focused on services and less applicable to me. I need to be smarter about it next time.

First Contact SIG

Instead, that morning I attended the First Contact Special Interest Group sessions, which aim to make OpenStack more accessible to newcomers. It was well attended, with even a few new and want-to-be contributors who were first-time PTG attendees - I think having the PTG in Europe really helped with that. The session focused on making sure everyone in the room/SIG is aware of the resources that are out there, to be able to help people looking to get started.

The SIG is also looking for points of contact for every project, so that newcomers have someone to ask questions to directly (even better if there's a backup person too, but difficult enough to find one as it is!).

Some of the questions that came up from people in the room related to being able to map projects to IRC channels (e.g. devstack questions go to #openstack-qa).

Also, the OpenStack community has a ton of mentoring programs, both formal and informal, and just going through the list to explain them took a while: Outreachy, Google Summer of Code, Upstream Institute, Women of OpenStack, First Contact Liaisons (see above). I didn't realise there were so many!

I remember when a lot of the initiatives discussed were started. It was interesting to hear the perspectives from people who arrived later, especially the discussions around the ones that have become irrelevant.

Packaging RPMs

On Tuesday I dropped by the packaging RPMs Working Group session: a small group made up of very focused RDO/Red Hat/SUSE people. The discussions were intense, with Python 2 going End of Life in under two years now.

The current consensus seems to be to create an RPM-based Python 3 gate based on 3.6. There's no supported distro that offers this at the moment, so we will create our own Fedora-based distro with only what we need, at the versions we need. Once RDO is ready with this, it could be moved upstream.

There were some concerns about 3.5 vs 3.6, as the current gating is done on 3.5. Debian also appears to prefer 3.6. In general it was agreed there should not be major differences between the two, and that either would be OK.

The clients must still support Python 2.

There was a little bit of discussion about the stable policy and how it doesn't apply to the specs or the rpm-packaging project. I think the example was Monasca's default backend not working, so a spec change to modify the backend was backported - which could be considered a feature backport, but since the project isn't under the stable policy remit it could be done.

There was a brief chat at the end about whether there is still interest in packaging services, as opposed to just shipping them as containers. There certainly still seems to be at this point.

Release management

A much more complete summary has already been posted on the list, and I had to leave the session halfway to attend something else.

There seems to be agreement that it is getting easier to upgrade (although some people still don't want to do it - perhaps an education effort is needed to help with this). People do use the stable point release tags.

The "pressure to upgrade": would Long-Term Support release actually help? Probably it would make it worse. The pressure to upgrade will still be there except there won't be a need to work on it for another year, and it'll make life worse for operators/etc submitting back fixes because it'll take over a year for a patch to make it into their system.

Fast-Forward Upgrade (which is not the same as skip-level upgrades) may help with that pressure... Or not: maybe different problems will come up, because of things like not restarting services in between upgrades. It batches things up and helps people upgrade faster, but changes nothing fundamentally.

The conversation moved to one year release cycles just before I left. It seemed to be all concerns and I don't recall active support for the idea. Some of the concerns:

  • Concerns about backports - so many changes
  • Concerns about marketing - it's already hard to keep up with all that's going on, and it's good to show the community is active and stuff is happening more than once a year. It's not that closely tied to releases though; announcements could still go out more regularly.
  • Planning when something will land may become even harder as so much can happen in a year
  • It's painful for both people who keep up and people who don't, because there is so much new stuff happening at once.

TripleO

The sessions began with a retrospective on Wednesday. I was really excited to hear that tripleo-common was going to get unit tests for workflows. I still love the idea of workflows, but I've found them becoming more and more difficult to work with and to review as they get larger. Boilerplate gets copy-pasted, but it can't work without a few changes that are easy to miss unless manually tested - and these get missed in reviews all the time.

The next session was about CI. The focus during Queens was on reliability, which worked well although promotions suffered as a result. There were some questions as to whether we should try to prevent people from merging anything when the promotion pipeline is broken but no consensus was really reached.

The Workflows session was really interesting: there have been a lot of lessons learnt from our initial attempt with Mistral over the last couple of years, and it looks like we're setting up for a v2 overhaul that'll get rid of many of the issues we found. Exciting! There was a brief moment of talk about ripping Mistral out and reimplementing everything in Ansible; conclusions unclear.

I didn't take good notes during the other sessions and once the venue closed down (snow!) it became a bit difficult to find people in the hotel and then actually hear them. Most etherpads with the notes are linked from the main TripleO etherpad.

~

After 8 years of maintaining my lil' custom Django blog, it's time for a change! I'd been thinking about migrating for a while. After the first couple of years of excitement I started falling further and further behind on framework upgrades, and my cute anti-spam system kicked the bucket a couple of years back, even though there was never much conversation on the blog. Drop me an email or a tweet if you want to chat about something here :)

I'd been postponing the migration because I thought it would be really painful to both migrate the content and keep the URL format the same, especially coming from a custom platform. It turned out to be really easy. Pelican rocks!

Migrating the content

Pelican comes with an import tool that supports bland little feeds like mine. By default my feed only displays 10 entries, but since it's my code I just modified it locally to show them all. That probably ended up being one of the least straightforward parts of the process, actually. I was super excited about Django when I first created the blog, but not too familiar with how to manage Python dependencies. Thus, although I did write down the dependency names in a text file, I wasn't forward-looking enough to include version numbers. pip freeze is my friend now. Thankfully I only had a couple of plugins to play guess-the-version with.
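For reference, the import invocation ends up being something like this (flags from memory, and the feed location here is my local test copy):

$ pelican-import --feed -m rst -o content http://localhost:8000/feed/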

I did end up making a couple of changes to Pelican locally so it would work better with my content (yay open-source).

First, to avoid the <pre> code snippets getting mangled with no linebreaks, I ended up commenting out a few lines in fields2pelican() that look like they're meant to ensure the validity of the original HTML. I was using a wizard editor in the old blog, so there's no reason the HTML shouldn't be valid. I wasn't too worried about it and didn't notice side-effects during the migration.

Secondly, the files weren't created with the correct slugs and filenames, which caused some issues when rewriting the URLs. The feed parser doesn't look at the real slug, so I figured out where the URL lives in feed2fields() (in entry.id, for me) and changed the slug = slugify(entry.title) line to break down that value and extract the real slug.
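For illustration, the change amounted to something along these lines - the exact parsing depends on what your feed puts in entry.id (mine ended with the slug):

# in feed2fields(), replacing: slug = slugify(entry.title)
slug = entry.id.rstrip('/').split('/')[-1]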

Adjusting the content

Now, I use tags quite liberally, and on the feed that was marked with "Tagged with: blah, bleh, bloh" at the end of an article. I wrote a short script to scrape that line from the rst files created in the previous step, add the discovered tags to :tags: in the metadata, and remove the 'Tagged with' line. That was fun! The script is ugly and bugs were found along the way, but it did the job, and now it even works when there are so many tags on an entry that they're spread over several lines ;)

Rewriting the URLs

I don't know if I should even give this a heading. Figuring out rewrite rules was giving me cold sweats, but it turns out Pelican gives you handy settings out of the box to have your URLs look like whatever you want. It's really easy. I mean, I don't think I broke anything?!
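For example, for URLs shaped like mine (/blog/YYYY/MM/slug/), the settings look something like this - ARTICLE_URL and ARTICLE_SAVE_AS are real Pelican settings, the exact pattern here is illustrative:

$ grep ARTICLE_ pelicanconf.py
ARTICLE_URL = 'blog/{date:%Y}/{date:%m}/{slug}/'
ARTICLE_SAVE_AS = 'blog/{date:%Y}/{date:%m}/{slug}/index.html'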

Except the feeds, but after some thinking that's something I decided to do on purpose. The blog has ended up aggregated in a lot of places I don't even remember, and I was really concerned about 8 years of entries somehow getting newer timestamps and flooding the planets I'm on. So, brand new feeds. I'll update the two or three planets I remember being a part of, and the others as I find them or they find me again :)
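Relatedly, the feed locations are plain Pelican settings too, so picking brand new paths was trivial - the default being:

FEED_ALL_ATOM = 'feeds/all.atom.xml'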

Going mad with sed

After putting what I had so far on a temporary place online, a couple of additional issues popped up:

  • When the feed was imported, some of the internal URLs were copied as full URLs rather than relative ones. That means there were a bunch of references to http://localhost:8000, since I'd used a local copy of the feed.
  • The theme, images and most of the links didn't work because they expected the site to start at / but I was working off a temporary sub-directory for the test version.

I've never used sed so much in my life. I'm going to be an expert at it for the next three days at least, until I forget it all again. Here are some of the commands, written down for future-me, for when how to use groups becomes a distant memory:

# Fix the images!
$ for f in `grep -rl "image:: http:\/\/localhost:8000" *`; do  sed -i 's/image:: http:\/\/localhost:8000/image:: {filename}/g' "$f"; done

# Fix the internal links!
$ sed -i 's/<\/blog\/[0-9]*\/[0-9]*/<{filename}\/Tech/g' content/Tech/*
$ sed -i 's/\({filename.*\)\/>`__/\1.rst>`__/g' content/Tech/*

# Fix the tags!
$ for f in `grep -rl /tag/ *`; do sed -i 's/\/tag\/\(.*\)\//{tag}\1/g' "$f"; done

I think I had to do a bunch of other ad-hoc modifications. I also expect to find more niggles which I'll fix as I see them, but for now I'm happy with the current shape of things. I can't overstate how much easier this was than I expected. The stuff that took the most time (remembering how to run the custom blog code locally, importing tags, sedding all the things) was nearly all self-inflicted, and the whole process was over in a couple of evenings.

Blogging from emacs

Sure feels nice.

~

Here are a couple of notes about the OpenStack Client, taken while dropping in and out of the room during the OpenStack PTG in Denver, a couple of weeks ago.

OSC 4

The original plan was simply to get rid of deprecated stuff, change a few names here and there, and keep compatibility-breaking changes to a minimum.

However, now shade may adopt the SDK and move some of its contents into it. Then shade would consume the SDK, and OSC would consume it as well. It would be pretty clean and easy to use, but would mean major breaking changes for OSC4. OSC would become a shim layer over osc-lib. The plugin interface is going to change, as loading is slow - every command requires loading all of the plugins, which takes over half of the loading time even though the commands themselves load quickly. (There will be more communication once we understand what the new plugin interface will look like.) OSC4 would rip out global argument processing and adopt os-client-config (a breaking change). It would adopt the SDK and stop using the client libraries.

Note that this may all change depending on how the SDK situation evolves.

From the end-user perspective, some option names will change. There is some old cruft left around for compatibility reasons that will disappear (e.g. "ip floating" will be gone; it changed a year ago to "floating ip"). The column output will handle structured data better, and some of this is already committed to the osc4 feature branch.

The order of commands will not be changed.

For authentication, the behaviour may change a bit between the CLI and clouds.yaml. os-client-config came along and changed a few things, notably with regard to precedence. The OSC way of doing things will be removed and replaced with OCC's.

Best effort will be made not to break scripts. The "configuration show" command shows your current configuration but not where it comes from - it's a bit hard to do because of all the merging of parameters going on.

The conversation continued about auth, how shade uses adapters and may change the SDK to use them as well: would sessions or adapters make the most sense? I had to attend another session and missed the discussion and conclusions.

Command aliases

There was a long discussion around command aliases, as some commands are very long to type (e.g. healthmonitor). It was very clear it's not something OSC wants to get into the business of managing itself (master list of collisions, etc.), so it would be up to individual plugins. There could be an individual .osc config file that would do the short-to-long name mapping, similar to a shell alias. It shouldn't be part of the official plugin (otherwise, "why don't we just use those names to begin with?"), but it could be another plugin that sets up alias mappings to the short name, or a second set of entry points, or a "list of shortcuts we found handy" in the documentation. Perhaps there should be a community-wide discussion about this.

Collisions are to be managed by users, not by OSC. Having one master list to manage the initial set of keywords is already an unfortunate compromise.
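In the meantime, the closest equivalent is a regular shell alias - for example, to shorten the healthmonitor commands mentioned above (the alias name is invented):

$ alias oslb='openstack loadbalancer'
$ oslb healthmonitor list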

Filtering and others

It's not possible to do filtering on lists, or any kind of complex filtering, at the moment. The recommendation, or what people currently do, is to output JSON and pipe it to jq to do what they need. The documentation should be extended to show how to do this.
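For example, something along these lines (field names depend on the command):

$ openstack server list -f json | jq '.[] | select(.Status == "ACTIVE") | .Name'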

At the moment filtering varies wildly between APIs and none of them are very expressive, so there isn't a lot OSC can do.

~

Yesterday, as part of the TripleO Deep Dives series I gave a short introduction to internationalisation in TripleO UI: the technical aspects of it, as well as a quick overview of how we work with the I18n team.

You can catch the recording on BlueJeans or YouTube, and below's a transcript.

~

Life and Journey of a String

Internationalisation was added to the UI during Ocata - just a release ago. Florian implemented most of it and did the lion's share of the work, as can be seen on the blueprint if you're curious about the nitty-gritty details.

Addition to the codebase

Here's an example patch from during the transition. On the left you can see how things were hard-coded, and on the right you can see the new defineMessages() interface we now use. Obviously, new patches should directly look like the right-hand side nowadays.

The defineMessages() dictionary requires a unique id and default English string for every message. Optionally, you can also provide a description if you think there could be confusion or to clarify the meaning. The description will be shown in Zanata to the translators - remember they see no other context, only the string itself.

For example, a string might sound active, as if it were related to an action or button, but actually be a descriptive help string. And some expressions are known to be confusing in English - "provide a node" has been the source of multiple discussions, on list and live, so we might as well pre-empt questions and offer additional context to help the translators decide on an appropriate translation.
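To give an idea, here's roughly what such a dictionary looks like (reconstructed from the extracted messages shown further down, so the actual component code may differ slightly):

const messages = defineMessages({
  username: {
    id: 'Login.username',
    defaultMessage: 'Username'
  },
  usernameRequired: {
    id: 'Login.usernameRequired',
    defaultMessage: 'Username is required.'
  }
});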

Extraction & conversion

Now we know how to add an internationalised string to the codebase - how do these get extracted into a file that will be uploaded to Zanata?

All of the following steps are described in the translation documentation in the tripleo-ui repository. Assuming you've already run the installation steps (basically, npm install):

$ npm run build

This does a lot more than just extracting strings - it prepares the code for being deployed in production. Once this ends you'll be able to find your newly extracted messages under the i18n directory:

$ ls i18n/extracted-messages/src/js/components

You can see the directory structure is kept the same as the source code. And if you peek into one of the files, you'll note the content is basically the same as what we had in our defineMessages() dictionary:

$ cat i18n/extracted-messages/src/js/components/Login.json
[
  {
    "id": "UserAuthenticator.authenticating",
    "defaultMessage": "Authenticating..."
  },
  {
    "id": "Login.username",
    "defaultMessage": "Username"
  },
  {
    "id": "Login.usernameRequired",
    "defaultMessage": "Username is required."
  },
[...]

However, JSON is not a format that Zanata understands by default. I think the latest version we upgraded to, or the next one, might have some support for it, but since there's no i18n JSON standard it's somewhat limited. In open-source software projects, po/pot files are generally the standard to go with.

$ npm run json2pot

> tripleo-ui@7.1.0 json2pot /home/jpichon/devel/tripleo-ui
> rip json2pot ./i18n/extracted-messages/**/*.json -o ./i18n/messages.pot

> [react-intl-po] write file -> ./i18n/messages.pot ✔️

$ cat i18n/messages.pot
msgid ""
msgstr ""
"POT-Creation-Date: 2017-07-07T09:14:10.098Z\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"MIME-Version: 1.0\n"
"X-Generator: react-intl-po\n"


#: ./i18n/extracted-messages/src/js/components/nodes/RegisterNodesDialog.json
#. [RegisterNodesDialog.noNodesToRegister] - undefined
msgid ""No Nodes To Register""
msgstr ""

#: ./i18n/extracted-messages/src/js/components/nodes/NodesToolbar/NodesToolbar.json
#. [Toolbar.activeFilters] - undefined
#: ./i18n/extracted-messages/src/js/components/validations/ValidationsToolbar.json
#. [Toolbar.activeFilters] - undefined
msgid "Active Filters:"
msgstr ""

#: ./i18n/extracted-messages/src/js/components/nodes/RegisterNodesDialog.json
#. [RegisterNodesDialog.addNew] - Small button, to add a new Node
msgid "Add New"
msgstr ""

#: ./i18n/extracted-messages/src/js/components/plan/PlanFormTabs.json
#. [PlanFormTabs.addPlanName] - Tooltip for "Plan Name" form field
msgid "Add a Plan Name"
msgstr ""
[...]

This messages.pot file is what will be automatically uploaded to Zanata.

Infra: from the git repo, to Zanata

The following steps are done by the infrastructure scripts. There's infra documentation on how to enable translations for your project; in our case, as the first internationalised JavaScript project, we had to update the scripts a little as well. This is useful to know if an issue happens with the infra jobs; debugging will probably bring you here.

The scripts live in the project-config infra repo, and there are three files of interest for us: upstream_translation_update.sh, propose_translation_update.sh and common_translations_update.sh.

In this case, upstream_translation_update.sh is the file of interest to us: it simply sets up the project on line 76, then sends the pot file up to Zanata on line 115.

What does "setting up the project" entails? It's a function in common_translations_update.sh, that pretty much runs the steps we talked about in the previous section, and also creates a config file to talk to Zanata.

Monitoring the post jobs

Post jobs run after a patch has already merged - usually to upload tarballs where they should be, update the documentation pages, etc., and also to upload messages catalogues onto Zanata. Being a 'post' job, however, means that if something goes wrong there is no notification on the original review, so it's easy to miss.

Here's the OpenStack Health page to monitor 'post' jobs related to tripleo-ui. Scroll to the bottom - hopefully tripleo-ui-upstream-translation-update is still green! It's good to keep an eye on it although it's easy to forget. Thankfully, AJaeger from #openstack-infra has been great at filing bugs and letting us know when something does go wrong.

Debugging when things go wrong: an example

We had a couple of issues whereby a linebreak got introduced into one of the strings, which works fine in JSON but breaks our pot file. If you look at the content from the bug (the full logs are no longer accessible):

2017-03-16 12:55:13.468428 | + zanata-cli -B -e push --copy-trans False
[...]
2017-03-16 12:55:15.391220 | [INFO] Found source documents:
2017-03-16 12:55:15.391405 | [INFO]            i18n/messages
2017-03-16 12:55:15.531164 | [ERROR] Operation failed: missing end-quote

You'll notice the first line is the last function we call in the upstream_translation_update.sh script; for debugging that gives you an idea of the steps to follow to reproduce. The upstream Zanata instance also lets you create toy projects, if you want to test uploads yourself (this can't be done directly on the OpenStack Zanata instance.)

This particular newline issue has popped up a couple of times already. We're treating it with band-aids at the moment, ideally we'd get a proper test on the gate to prevent it from happening again: this is why this bug is still open. I'm not very familiar with JavaScript testing and haven't had a chance to look into it yet; if you'd like to give it a shot that'd be a useful contribution :)

Zanata, and contributing translations

The OpenStack Zanata instance lives at https://translate.openstack.org. This is where the translators do their work. Here's the page for tripleo-ui, you can see there is one project per branch (stable/ocata and master, for now). Sort by "Percent Translated" to see the languages currently translated. Here's an example of the translator's view, for Spanish: you can see the English string on the left, and the translator fills in the right side. No context! Just strings.

At this stage of the release cycle, the focus would be on 'master,' although it is still early to do translations; there is a lot of churn still.

If you'd like to contribute translations, the I18n team has good documentation about how to go about it. The short version: sign up on Zanata, request to join your language team, and once you're approved - you're good to go!

Return of the string

Now that we have our strings available in multiple languages, it's time for another infra job to kick in and bring them into our repository. This is where propose_translation_update.sh comes in. We pull the po files from Zanata, convert them to JSON, then do a git commit that will be proposed to Gerrit.
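Roughly, it's the upload side in reverse (same zanata-cli flags as in the push shown earlier; po2json is the npm script mentioned below):

$ zanata-cli -B -e pull
$ npm run po2json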

The cleanup step does more than it might seem. It checks if files are translated over a certain ratio (~75% for code), which avoids adding new languages when there might only be one or two words translated (e.g. someone just testing Zanata to see how it works). Switching to your language and yet having the vast majority of the UI still appear in English is not a great user experience.
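If you're curious about where a given file stands locally, gettext can print the raw numbers that kind of ratio is computed from (the path and output here are illustrative):

$ msgfmt --statistics -o /dev/null i18n/ja.po
844 translated messages, 90 fuzzy translations, 229 untranslated messages.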

In theory, files that were added but are now below 40% should get automatically removed, however this doesn't quite work for JavaScript at the moment - another opportunity to help! Manual cleanups can be done in the meantime, but it's a rare event so not a major issue.

Monitoring the periodic jobs

Zanata is checked once a day, every morning; there is an OpenStack Health page for this as well. You can see there are two jobs at the moment (hopefully green!), one per branch: tripleo-ui-propose-translation-update and tripleo-ui-propose-translation-update-ocata. The job should run every day even if there are no updates - it simply means there might not be a git review proposed at the end.

We haven't had issues with the periodic job so far, though the debugging process would be the same: figure out based on the failure if it is happening at the infra script stage or in one of our commands (e.g. npm run po2json), try to reproduce and fix. I'm sure super-helpful AJaeger would also let us know if he were to notice an issue here.

Automated patches

You may have seen the automated translations updates pop up on Gerrit. The commit message has some tips on how to review these: basically, don't agonise over the translation contents, as problems there should be handled in Zanata anyway; just make sure the format looks good and is unlikely to break the code. A JSON validation tool runs during the infra prep step in order to "prettify" the JSON blob and limit the size of the diffs, so once the patch makes it out to Gerrit we know the JSON is well-formed at least.
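The same kind of check is easy to run by hand - Python's built-in tool both validates and prettifies (the file path here is illustrative):

$ python -m json.tool i18n/locales/ja.json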

Try to review these patches quickly, out of respect for the translators' work. It's not very nice to spend a lot of time translating a project and yet not have your work included because no one could be bothered to merge it :)

A note about new languages...

If the automated patch adds a new language, there'll be an additional step required after merging the translations in order to enable it: adding a string with the language name to a constants file. Until recently, this took 3 or 4 steps - thanks to Honza for making it much simpler!

This concludes the technical journey of a string. If you'd like to help with i18n tasks, we have a few related bugs open. They go from very simple low-hanging-fruits you could use to make your first contribution to the UI, to weird buttons that have translations available yet show in English but only in certain modals, to the kind of CI resiliency tasks I linked to earlier. Something for everyone! ;)

Working with the I18n team

It's really all about communication. Starting with...

Release schedule and string freezes

String freezes are noted on the main schedule but tend to fit the regular cycle-with-milestones work. This is a problem for a cycle-trailing project like tripleo-ui as we could be implementing features up to 2 weeks after the other projects, so we can't freeze strings that early.

There were discussions at the Atlanta PTG around whether the I18n team should care at all about projects that don't respect the freeze deadlines. That would have made it impossible for projects like ours to ever make it onto the I18n official radar. The compromise was that cycle-trailing projects should have an I18n cross-project liaison who communicates with the I18n PTL and team to inform them of deadlines, and that they should ignore the Soft Freeze and only do a Hard Freeze.

This will all be documented under an i18n governance tag; while waiting for it the notes from the sessions are available for the curious!

What's a String Freeze again?

The two are defined on the schedule: a soft freeze means not allowing changes to existing strings, as that invalidates the translators' work and forces them to retranslate; a hard freeze means no additions, changes or anything else, in order to give translators a chance to catch up.

When we looked at Zanata earlier, there were translation percentages beside each language: the goal is always the satisfaction of reaching 100%. If we keep adding new strings then the goalpost keeps moving, which is discouraging and unfair.

Of course there's also an "exception process" when needed, to ask for permission to merge a string change with an explanation or at least a heads-up, by sending an email to the openstack-i18n mailing list. Not to be abused :)

Role of the I18n liaison

...Liaise?! Haha. The role is defined briefly on the Cross-Projects Liaison wiki page. It's much more important toward the end of the cycle, when the codebase starts to stabilise, there are fewer changes and translators look at starting their work to be included in the release.

In general it's good to hang out on the #openstack-i18n IRC channel (very low traffic), attend the weekly meeting (it alternates times), be available to answer questions, and keep the PTL informed of the I18n status of the project. In the case of cycle-trailing projects (quite a new release model still), it's also important to be around to explain the deadlines.

A couple of examples having an active liaison helps with:

  • Toward the end or after the release, once translations into the stable branch have settled, the stable translations get copied into the master branch on Zanata. The strings should still be fairly similar at that point and it avoids translators having to re-do the work. It's a manual process, so you need to let the I18n PTL know when there are no longer changes to stable/*.
  • Last cycle, because the cycle-trailing status of tripleo-ui was not correctly documented, a Zanata upgrade was planned right after the main release - which for us ended up being right when the codebase had stabilised enough and several translators had planned to be most active. Would have been solved with better, earlier communication :)

Post-release

After the Ocata release, I sent a few screenshots of tripleo-ui to the i18n list so translators could see the result of their work. I don't know if anybody cared :-) But unlike Horizon, which has an informal test system available for translators to check their strings during the RC period, most of the people who volunteered translations had no idea what the UI looked like. It'd be cool if we could offer a test system with regular string updates next release - maybe just an undercloud on the new RDO cloud? Deployment success/failure strings wouldn't be verifiable, but the rest would be, and the system would be easier to maintain than a full dev TripleO environment - better than nothing. Perhaps an idea for the Queens cycle!

The I18n team has a priority board on the Zanata main page (only visible when logged in I think). I'm grateful to see TripleO UI in there! :) Realistically we'll never move past Low or perhaps Medium priority which is fair, as TripleO doesn't have the same kind of reach or visibility that Horizon or the installation guides do. I'm happy that we're included! The OpenStack I18n team is probably the most volunteer-driven team in OpenStack. Let's be kind, respect string freezes and translators' time! \o/

</braindump>

~

I've had to set this up on three computers in the last couple of weeks and I forgot both how to do it and how to google for it effectively every single time, so here it goes as a memo to future-self for next time.

If your git status shows gibberish like this for non-ASCII file names:

$ git status
On branch master
[...]

    modified:   "\350\265\244\343\201\204\346\214\207"

The solution is to disable quotes in paths:

$ git config --global core.quotePath false

Tadam:

$ git status
On branch master
[...]

    modified:  赤い指