Posted 1 day ago by Jim Chen
Over the last few months, I've been working on an improved App Not Responding (ANR) dashboard for Fennec, which is now hosted at telemetry.mozilla.org/hang/anr. With the help of many people, I'm glad to say that the dashboard is now mature enough to be a useful tool for Fennec developers.
The idea of ANR/hang reporting is similar to crash reporting — every time the Fennec UI becomes unresponsive for more than five seconds, Android shows an “App Not Responding” dialog; the ANR Reporter detects this condition and collects the following information about the hang:
Stacks for Java threads in Fennec
Stacks for Gecko threads (C++ stacks and profiler pseudo-stacks)
System information listed in about:telemetry
Fennec logs to help debug the hang
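The detection side of this follows a classic watchdog pattern: the UI thread posts periodic heartbeats, and a background thread dumps every thread's stack when the heartbeats stop. Here is a minimal, self-contained sketch of that pattern in Python (purely illustrative; not Fennec's actual Java/C++ implementation):

    import sys
    import threading
    import time
    import traceback

    HANG_THRESHOLD = 5.0  # seconds, matching Android's ANR timeout
    last_heartbeat = time.time()

    def heartbeat():
        """Called from the UI/main loop while it is still responsive."""
        global last_heartbeat
        last_heartbeat = time.time()

    def watchdog():
        """If no heartbeat arrives for HANG_THRESHOLD, dump every thread's stack."""
        while True:
            time.sleep(1)
            if time.time() - last_heartbeat > HANG_THRESHOLD:
                for thread_id, frame in sys._current_frames().items():
                    print("Thread %s:" % thread_id)
                    traceback.print_stack(frame)
                # a real reporter would also collect system info and logs here
                return

    threading.Thread(target=watchdog).start()

    # Demo: heartbeat for a while, then "hang" long enough to trip the watchdog.
    for _ in range(3):
        heartbeat()
        time.sleep(1)
    time.sleep(8)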
The ANR Reporter is enabled on Nightly and Aurora builds only, and if the user has not opted out of telemetry, the collected information is sent back to Mozilla, where the data are aggregated and presented through the ANR Dashboard. Because the debug logs may contain private information, they are not processed and are only available internally, within Mozilla.
The ANR Dashboard presents weekly aggregated data collected through the ANR reporter. Use the drop-down list at the top of the page to choose a week to display.
Data for each week are then grouped by certain parameters from ANR reports. The default grouping is “appName”, and because ANR reports are specific to Fennec, you only see one column in the top hangs chart labeled “Fennec”. However, if you choose to group by, for example, “memsize”, you will see many columns in the chart, with each column representing a different device memory size seen from ANR reports.
Each column in the top hangs chart shows the number of hangs, and each column is further divided into blocks, each representing a different hang. Hover over the blocks to see the hang stack and the number of hangs. This example shows that 8 hangs with that signature occurred on devices with 768MB of memory over the past week.
Colors are preserved across columns, so the same colored blocks all represent the same hang. The blue blocks at the bottom represent all hangs outside of the top 10 list.
To the right of the top hangs chart is the distributions chart. It shows how different parameters are distributed for all hangs. Hover over each block to see details. This example shows that 36% of all hangs over the past week occurred on devices running Android API level 15 (corresponding to Android 4.0.3-4.0.4 Ice Cream Sandwich).
The distributions chart can also be narrowed down to specific groups. This lets us find out, for example, what percentage of hangs on devices with 1GB of memory occurred on the Nightly update channel.
Clicking on a block in the top hangs chart brings up a Hang Report. The hang report is specific to the column that you clicked on. For example, if you are grouping by “memsize”, clicking on a hang in the “1G” column will give you one hang report, and clicking on the same hang in the “2G” column will give you a different one. Switch grouping to “appName” if you want to ignore groups — in that case there is only one column, “Fennec”.
The hang report also contains a distributions chart specific to the hang. The example above shows that 14% of this hang's occurrences were on Nexus 7 devices.
In addition, the hang report contains a builds chart that shows the frequency of occurrence for different builds. This example shows there was one hang from build 20140224030203 on the 30.0a1 branch over the past week. The chart can be very useful when verifying that a hang has been fixed in newer builds.
Last but not least, the hang report contains stacks from the hang. The stacks in the hang report are more detailed than the stack shown on the main page. You can also look at stacks from other threads — useful for finding deadlocks!
When comparing the volume of hangs, a higher number can mean two things — either the side with the higher number is more likely to hang, or it simply has more usage. For example, if we are comparing hangs between devices A and B, and A has a higher number of hangs, it is possible that A is more prone to hanging; however, it is also possible that A simply has more users and therefore more chances for hangs to occur.
To provide better comparisons, the ANR Dashboard has a normalization feature that tries to account for usage. Once “Normalize” is enabled at the top of the dashboard, all hang numbers in the dashboard are divided by usage as measured by reported uptime. Instead of displaying the raw number of hangs, the top hangs chart displays the number of hangs per one thousand user-hours. For example, 10 hangs per 1k user-hours means that, on average, 1000 users each using Fennec for one hour will experience 10 hangs combined; or equivalently, one user using Fennec for 1000 hours will experience 10 hangs total. The distributions chart is also updated to reflect usage.
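The normalization arithmetic itself is trivial; here is a minimal sketch (the function and variable names are mine, not the dashboard's actual code):

    def hangs_per_1k_user_hours(hang_count, uptime_hours):
        """Divide a raw hang count by usage, measured in reported uptime."""
        if uptime_hours <= 0:
            return None  # not enough usage data; such columns get hidden
        return hang_count / (uptime_hours / 1000.0)

    # 120 hangs observed across 12,000 user-hours of uptime:
    print(hangs_per_1k_user_hours(120, 12000))  # 10.0 hangs per 1k user-hours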
As a demonstration, the image below shows un-normalized hangs grouped by device memory size. There is no clear trend among the different values.
The image below shows normalized hangs based on the same data. In this case it is clear that, once usage is accounted for, higher device memory size generally corresponds to a lower number of hangs. Note that the “unknown” column is hidden because there is not enough usage data for devices with “unknown” memory size.
At the moment, I think uptime is the best available measurement for usage. Hopefully there will be a better metric in the future to provide more accurate results. Or let me know if it already exists!
Posted 1 day ago by nore...@blogger.com (Jess)
After several months of hard work by the Open Badges team, we are announcing that BadgeKit is now available in private beta. This means that BadgeKit is available in two forms: a hosted version of Mozilla BadgeKit, in private beta for select partner organizations that meet specific technical requirements, and an open-source release that anyone can download from GitHub and run on their own servers.
BadgeKit is a set of open, foundational tools to make the badging process easy. It includes tools to support the entire process, including badge design, creation, assessment and issuing, remixable badge templates, milestone badges to support leveling up, and much more. The tools are open source and have common interfaces to make it easy to build additional tools or customizations on top of the standard core, or to plug in other tools or systems.
From a design perspective, this milestone represents refinements in user research and testing, user experience, user interface and branding.
We did user testing with members of the Hive in Brooklyn. In preparation for this release, we conducted extensive user research to define the needs and goals of badge issuers. This work, led by Emily Goligoski, helped to define requirements for the BadgeKit offering as well as inform the user experience. The research was done using a variety of methodologies; however, it is worth noting that all of this work was done in the open. Emily organized distributed user testing in key markets such as New York, Chicago and Toronto to do everything from needs analysis to accessibility and functionality testing. The Open Badges weekly community calls were leveraged to pull in input from the highly motivated research and practitioner cohorts. Much of the work is documented both on her blog and in GitHub. We paired every implementation milestone with some form of user testing and iteration. While this may sound obvious, it was a new way of working for our team, and I can unequivocally say that the product is better because of this practice. User research and testing did not happen in a bubble; rather, it became completely integrated with our design and implementation cycle. As a result, developers and designers became comfortable making informed iterations on the offering, as developers, designers and team researchers all participated in some form of user testing over the past three months.
As a direct result of the extensive research and testing, the user experience for the entire BadgeKit offering was deeply refined. This work, led by Matthew Willse, introduced some new features, such as badge “templates”, which let any badge issuer clone a badge template and remix it. This gives us the unique ability to offer template packages based on common badge requests from the community, as well as eventually to empower the large Open Badges ecosystem to develop badge templates of their own (and perhaps explicitly state how they are comfortable with their content being shared and remixed). One component of this work that evolved as a direct result of testing was the increased attention to copy. Sue Smith led this work, which entailed everything from tooltip development and a glossary to API documentation. Considering that BadgeKit takes an issuer from badge definition and visual design to assessment and issuing, designing the user experience was no small effort, and the attention to detail, combined with designing in the open, proved to be a solid approach for the team.
Perhaps the most obvious design component of this release is the user interface design and brand definition. Adil Kim kicked off this work with an exploration of the brand identity. BadgeKit is under the parent brand of OpenBadges, which sits under the even larger parent brand of Mozilla - which gave us the constraints of designing within the brand guidelines. After exploring options to represent the visual metaphor for this modular system, here is the new logo:
The logo is meant to evoke the imagery of both a badge and a tool in one glance. For the untrained craftsperson (ahem), while gazing into the mark you will see a bolt. This connotes that BadgeKit is a tool: something that allows you to dive into the details and construct a badge, and a system for your community. The logo incorporates the palette from Mozilla Open Badges in a playful Möbius strip, at once implying that while this is a handcrafted experience, it is also a seamless one. This logo fits nicely into the larger brand family while reading on its own, as if to say, “hey, BadgeKit is the offering for badge MAKERS, dive in and get your hands dirty!”
The brand is in turn extended to the user interface design. The overall art direction here was that it needs to be clean, yet approachable. We know that many organizations will not be using all of the components in the interface directly on badgekit.org; however, the design needs to take into account that everything must be accessible and read as remixable. Some details to note here are the simplified navigation, the palette and subtle details like the ability to zoom in on thumbnails on hover.
It’s worth noting that while Emily, Matthew, Sue and Adil, as well as Carla, Meg, Erin, Jade, Sabrina Ng, Chloe and Sunny, were invested in much of this design work, there was an intentional yet organic partnership with the developers (Zahra, Erik, Andrew, Chris, Mavis Ou, Mike and Brian, plus many, many community contributors) who were doing the implementation. We had weekly critiques of the work and often engaged in conversation about design as well as implementation on GitHub.
Another component of this work is looking ahead towards future features. Chloe Varelidi led work here, thinking through the potential for badge and skill discovery. Under a grant from The Bill & Melinda Gates Foundation, Chloe and her team are thinking through ways to represent earner pathways. This will eventually be leveled up into the core BadgeKit offering, but you can start to dip your toes into those features by checking out the work here.
And the good news is that design never ends! Design isn’t just a destination, it’s an invitation to a conversation. Check it out, let us know what’s working and, importantly, what’s not.
Posted 1 day ago
Last week I built a little tool called github-pr-triage. It's a single page app that sits on top of the wonderful GitHub API v3.
Its goal is to try to get an overview of what needs to happen next to open pull requests. Or rather, what needs to happen next to get them closed. Or rather, who needs to act next to get them closed.
It's very common, at least in my team, that someone puts up a pull request, asks someone to review it and then walks away from it. She then doesn't notice that perhaps the integrated test runner fails on it, and the reviewer is thinking to herself "I'll review the code once the tests don't fail", and all of a sudden the ball is not in anybody's court. Or someone makes a comment on a pull request that the author of the pull request misses in her firehose of email notifications. Now she doesn't know that the comment means the ball is back in her court.
Ultimately, the responsibility lies with the author of the pull request to pester and nag till it gets landed or closed, but oftentimes the ball is in someone else's court, and hopefully this tool makes that clearer.
Here's an example instance: https://prs.paas.allizom.org/mozilla/socorro
Currently you can use prs.paas.allizom.org for any public GitHub repo, but if too many projects eat up all the API rate limits we have, I might need to narrow it down to Mozilla repos. Or, you can simply host your own. It's just a simple Flask server.
About the technology
I'm getting more and more productive with Angular but I still consider myself a beginner. Saying that also buys me insurance when you laugh at my code.
So it's a single page app that uses HTML5 pushState and an angular $routeProvider to make different URLs.
The server simply acts as a proxy for making queries to api.github.com and bugzilla.mozilla.org/rest, and the reason for that is caching.
Every API request you make through this proxy gets cached for 10 minutes. But here's the clever part: every time it fetches actual remote data, it stores it in two caches, one for 10 minutes and one for 24 hours. When it stores it for 24 hours, it also stores the last ETag so that I can make conditional requests. The advantage of that is that you quickly know if the data hasn't changed and, more importantly, a conditional request doesn't count against you in the rate limiter.
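Here is a minimal sketch of that two-tier cache with conditional requests, using Flask and requests (the structure and names are my own reconstruction from the description above, not github-pr-triage's actual code):

    import time
    import requests
    from flask import Flask, jsonify

    app = Flask(__name__)

    SHORT_TTL = 10 * 60      # serve cached data for 10 minutes
    LONG_TTL = 24 * 60 * 60  # keep data plus its ETag around for 24 hours

    short_cache = {}  # url -> (expires_at, data)
    long_cache = {}   # url -> (expires_at, data, etag)

    @app.route('/githubproxy/<path:path>')
    def proxy(path):
        url = 'https://api.github.com/' + path
        now = time.time()

        # Fresh enough? Serve without touching GitHub at all.
        hit = short_cache.get(url)
        if hit and hit[0] > now:
            return jsonify(hit[1])

        # Stale but possibly unchanged: send the stored ETag. A 304 reply
        # means "unchanged" and does not count against the rate limit.
        headers = {}
        old = long_cache.get(url)
        if old and old[0] > now:
            headers['If-None-Match'] = old[2]

        resp = requests.get(url, headers=headers)
        if resp.status_code == 304:
            data, etag = old[1], old[2]
        else:
            data, etag = resp.json(), resp.headers.get('ETag')

        short_cache[url] = (now + SHORT_TTL, data)
        long_cache[url] = (now + LONG_TTL, data, etag)
        return jsonify(data)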
Posted 1 day ago by Pierros Papadeas
As we examined in the intro post, the need for contribution activity metrics for different contribution areas in Mozilla has been high. It was only logical that many attempts were made to address this issue, mainly at the area level (and not at a Mozilla-wide level). Almost all of them had zero interaction with each other, and there was a general lack of vision for a holistic approach to the problem.
After one of our initial gatherings as the (then meta-) Community Building Team, a couple of people brainstormed together a possible solution to our problem. Together with Josh Matthews, Giorgos Logiotatidis, Ricky Rosario and Liz Henry a new approach was born. Enter project Blackhole!
Project Blackhole was a collaborative effort to develop and maintain an infrastructure for gathering and serving raw contribution data within Mozilla. We created a data architecture and flow, together with a data schema and specification, to describe contribution activities for the first time in Mozilla. The project went far enough (thanks to Josh) to create a working prototype for the back-end and front-end.
What went right:
Having a single project to drive multiple metrics efforts forward got people engaged. Everyone saw the value of de-duplicating efforts and tapping into that as a resource. Also during the process of designing and testing it we were able to self-identify as a group of people that share interest and commitment towards a common goal. Most of those people went on to become active members of the Systems and Data Working Group. Finally, we ended up with a common language and descriptions around contribution activities, a really valuable asset to have for the future of cross-project tracking.
What went wrong:
Building *anything* from scratch can be hard. Really hard. First, everyone (rightfully) questions the need to build something instead of re-using what is out there. Once you get everyone on board, development and deployment resources are hard to find, especially on short notice. On top of that, Blackhole’s architecture *seemed* logical enough in theory, but it was never tested at scale, so no one involved was 100% sure that our architecture would survive stress tests and the scale of Mozilla’s contribution ecosystem.
PRO TIP: Changing the project name does not help. We went from “Blackhole” to “Wormhole” (and back to “Blackhole”?), to better reflect the proposed data flow (data would not disappear forever!) and people got confused. Really confused. Which is obviously something that is not helpful during conversations. Pick a name, and stick to it!
Lack of a team dedicated to it, and the inability to get the project listed as a personal goal of people (or teams), halted any progress, leading us to a fearsome dead end.
What we learned:
As with most failures, this one was also really valuable. We learned that:
we need to be a top line goal for people and teams
we need to examine really well what is out there (internal or external to Mozilla) and investigate the possibility of re-using it.
we need a clear and common language to make communications as effective as possible
we need to be inclusive in all our procedures as a working group, with volunteers as well as all paid staff.
and in true Mozilla fashion: we need to start small, test and iterate with a focus on modularity.
A way forward?
With those lessons learned from the process, we sat down last December as a group and re-aligned. We addressed all five issues and now we are ready to move forward. And the name of it? Baloo. Stay tuned for more info in our next detailed post.
Posted 1 day ago by ahal
tl;dr - It is possible to add more mach to your B2G repo! To get started, install pip:
$ wget https://raw.github.com/pypa/pip/master/contrib/get-pip.py -O - | python
$ pip install b2g-commands
To play around with it, cd to your B2G repo and run:
$ git pull # make sure repo is up to date
$ ./mach help # see all available commands
$ ./mach help <command> # see additional info about a command
Most people who spend the majority of their time working within mozilla-central have probably been
acquainted with mach. In
case you aren't acquainted, mach is a generic command dispatching tool. It is possible to write scripts
called 'mach targets' which get registered with mach core and transformed into commands. Mach targets
in mozilla-central have access to all sorts of powerful hooks into the build and test infrastructure
which allow them to do some really cool things, such as bootstrapping your environment, running builds
and tests, and generating diagnostics.
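For a sense of what a mach target looks like, here is a bare-bones sketch (the command name, category and body are made up for illustration):

    from mach.decorators import CommandArgument, Command, CommandProvider

    @CommandProvider
    class DiagnosticCommands(object):
        """Registered with the mach driver; each @Command becomes a subcommand."""

        @Command('hello', category='misc',
                 description='Print a trivial diagnostic message.')
        @CommandArgument('--loud', action='store_true',
                         help='Shout instead of speaking politely.')
        def hello(self, loud=False):
            message = 'hello from mach'
            print(message.upper() if loud else message)
            return 0  # exit code reported back to the shell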
A contributor (kyr0) and I have been working on a side project called b2g-commands
to start bringing some of that awesomeness to B2G. At the moment b2g-commands wraps most of the major
B2G shell scripts, and provides some brand new ones as well. Here is a summary of its current features:
Bootstrap your environment - sets up system packages needed to build (includes setting up gcc-4.6)
Easy to discover arguments - no need to memorize or look up random environment variables
Helpful error messages where possible - clear explanations of what went wrong and how to fix it
Fully compatible with existing build system including .userconfig
List Android vendor ids for udev rules
Clobber objdir/out directories
I feel it's important to reiterate that this is *not* a replacement for the current build system. You
can have b2g-commands installed and still keep your existing workflows if desired. Also important to note is
that there's a good chance you'll find bugs (especially related to the bootstrap command on varying platforms),
or arguments missing from your favourite commands. In this case please don't hesitate to contact me or
file an issue. Or, even better, submit a pull request.
If the feature set feels a bit underwhelming, that's because this is just a first iteration. I think
there is a lot of potential here to add some really useful things.
Unfortunately, this is just a side project I've been working on and I don't have as much time to devote
to it as I would like. So I encourage you to submit pull requests (or at least submit an issue) for any
additional functionality you would like to see. In general I'll be very open to adding new features.
In the end, because this module lives outside the build system, it will only ever be able to wrap existing
commands or create new ones from scratch. This means it will be somewhat limited in what it is capable of
providing. The targets in this module don't have the same low-level hooks into the B2G and gaia repos that
the targets for desktop have into gecko. My hope is that if a certain feature in this module turns out to
be especially useful and/or widely used it'll get merged into the B2G repo and be available by default.
Eventually my hope is that we implement some deeper mach integration into the various B2G repos (especially
gaia) which would allow us to create even more powerful commands. I guess time will tell.
Posted 2 days ago by Chris Heilmann
As part of the workweek I am currently at, I set a goal to give a brownbag on “writing for speaking”. The reason is that some of the training materials I recorded for the Mobile World Congress were great marketing/press materials but quite a pain to speak into a camera reading from a teleprompter.
For the record: the original text is a good press release or marketing article. It is succinct, it is full of great soundbites and it brings the message across. It is just not easy to deliver. To show the issues and explain how that kind of wording can come across, I took the script apart. I explained paragraph by paragraph what the problems are and proposed a replacement that is more developer-communication friendly. You can see the result on GitHub:
The result is an easier to deliver text with less confusion. Here’s a recording of it to compare.
I will follow this up with some more materials on simpler communication for speaking soon.
Posted 2 days ago by Lawrence Mandel
I’m excited to share that I am stepping into a new role with Mozilla as manager of the Release Management team. Below is an e-mail that my friend and manager Sheila Mooney sent to Mozilla employees last week announcing this change.
Date: Fri, 28 Feb 2014 11:19:07 -0800 (PST)
From: Sheila Mooney
To: team Mozilla
Subject: Changes in Release Management
I am happy to share some changes I am making to my team. Effective immediately, Lawrence Mandel will be moving into the role of Manager of the Release Management team. With the Release Managers in tight collaboration with the Project/Program Managers, we can think beyond just keeping the trains running on time and tighten our focus on quality, metrics and process to ensure we are shipping the best possible products to our users. Lawrence's experience inside and outside Mozilla aligns closely with these goals and I am very excited to see what he does with this role!
Lawrence will be transitioning many of his current project management responsibilities to others in my team in order to focus fully on this new challenge. The Web Compatibility Engineers will continue to report to him and Chris Peterson will report to me.
Please join me in congratulating Lawrence on his new opportunity!
Tagged: mozilla, release management [Less]
Posted 2 days ago by glob
the following changes have been pushed to bugzilla.mozilla.org:
 add the “Preview” mode for attachment comments
 Make the dashboard white-on-red counter easier to click
 “Your Outstanding Requests” emails don’t include superreview requests
 develop a system to track the lifetime of review/feedback/needinfo requests
 all tracking flags are visible on the ‘change many bugs at once’ page
 Create product and affiliations for Intellego project
 grammar issue
 join_activity_entries doesn’t reconstitute text with commas correctly.
 enable USE_MEMCACHE on most objects
 improve instrumentation of bugzilla’s internals
 changing timezone breaks MyDashboard
 increase the mod_perl sizelimit to 700_000 on production
 Fix content-type for woff files
 Comment and Preview tabs need accessibility markup
 Comment textarea has padding:0
 ReferenceError: REVIEW is not defined page.cgi javascript error when viewing a patch in Splinter
 Please rename Talkilla product to Loop and update User Stories extension
discuss these changes on mozilla.tools.bmo.
Filed under: bmo, mozilla
Posted 2 days ago
Amazon Pricing - Expensive or Cheap?
Amazon ondemand nodes are fantastic for rapid iteration, but using them in production is expensive naivety. It is expensive for Amazon to maintain spare capacity to allow customers to launch any of the wide variety of nodes they offer ondemand. Forecasting demand at Amazon scale can’t be easy. As a result, Amazon recommends that customers buy reserves with an upfront payment and then pay a discounted rate after. This is brilliant, as it shifts the capacity-planning burden to each customer. This would net us a 60% discount if we could forecast our AWS usage perfectly.
Fortunately, Amazon also has a spot-pricing model. Spot prices can be 70-90% lower than ondemand (we’ve also seen them 50% higher). The downside is that Amazon can kill these nodes at any point, and node availability is limited compared to ondemand. Given that Amazon’s competition can’t match spot prices, Amazon might be selling their unused ondemand capacity at cost. I doubt that anyone smaller than Amazon can maintain their own hardware with salaried ops for less than Amazon’s spot prices.
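To make the trade-offs concrete, here is a back-of-the-envelope comparison using the discount ranges above (the $0.10/hour ondemand rate is a hypothetical figure, not an actual AWS price):

    ondemand_rate = 0.10  # hypothetical $/hour for some instance type

    reserved_rate = ondemand_rate * (1 - 0.60)  # ~60% off with perfect forecasting
    spot_low = ondemand_rate * (1 - 0.90)       # spot at its typical cheapest
    spot_high = ondemand_rate * (1 - 0.70)      # spot at its typical priciest

    hours = 24 * 30  # one node-month
    print('ondemand: $%.2f' % (ondemand_rate * hours))   # $72.00
    print('reserved: $%.2f' % (reserved_rate * hours))   # $28.80
    print('spot: $%.2f to $%.2f' % (spot_low * hours, spot_high * hours))  # $7.20 to $21.60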
We spent 2014 retrofitting our c-i architecture to cope with failure so we can run more of our workload on spot nodes.
On our January AWS bill we were 30% more cost-efficient. Because this was accomplished late in the month, the win shows up as the bill not going up while coping with a higher-than-ever load.
For February we were aiming to drop the bill to under $80K.
The following is a summary of where we are.
We now run the majority of our workload on Amazon spot nodes; the spot:ondemand ratio is between 2:1 and 7:1. Note that we still pay more for the ondemand portion of our bill because ondemand is a lot more expensive.
At $74,389.03, our Feb bill is 36% lower than Jan.
Our current AWS spending per job is approximately half of what we paid in December
We now bid on a range of AWS node types to maximize node availability and minimize price. This results in a >=50% lower spot bill. We now run a portion of our workload on 2x-faster VMs when cheaper spot machine types are not available (see the sketch after this list).
Our AWS scheduler now ramps up more slowly to avoid temporary overprovisioning. Note the improvement on the right side of the graph (tall & narrow spikes are bad)
We are evaluating hostedgraphite.com for monitoring our efficiency. It’s nice to have someone offer a well-supported, open-source-compatible solution that can cope with the 30K+ metrics our 1000s of VMs generate.
Halved Linux Firefox and Android try build time via a shared S3 object cache
Halved B2G build time via jacuzzis
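As mentioned in the bidding item above, choosing among multiple node types boils down to picking the cheapest type that currently has capacity; a toy sketch (type names and prices are invented, and a real implementation would query the EC2 spot-price API):

    # Hypothetical current spot prices in $/hour; None means no capacity now.
    spot_prices = {
        'm1.medium': 0.012,
        'm3.large': None,     # no spot capacity available right now
        'c3.xlarge': 0.035,   # ~2x faster, still far below ondemand
    }

    def pick_instance_type(prices, max_bid):
        """Return the cheapest available type under our bid ceiling, or None."""
        candidates = [(price, itype) for itype, price in prices.items()
                      if price is not None and price <= max_bid]
        if not candidates:
            return None  # fall back to ondemand nodes
        return min(candidates)[1]

    print(pick_instance_type(spot_prices, max_bid=0.05))  # m1.medium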
Mozilla Data Center plans for March
Amazon S3 is cheap, fast and robust. EC2 is incredibly flexible. Both are great for quickly iterating on cool ideas. Unfortunately, most of our infrastructure runs on physical machines. We need to improve our non-elastic in-house capacity with what we learned in the cloud:
Use a shared object cache for Windows/Mac builds. This should more than double Windows build speed. The plan is to use Ceph for S3-compatible shared object storage (a sketch of the idea follows this list).
Get OpenStack bare-metal virtualization working so we can move as fast there as we do in EC2
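As promised above, here is one way the shared object cache could look, using boto pointed at a Ceph RADOS gateway's S3-compatible endpoint (the hostname, bucket name and environment variables are all made up for illustration):

    import os
    import boto
    from boto.s3.connection import OrdinaryCallingFormat

    # Point boto at an in-house S3-compatible endpoint instead of AWS.
    conn = boto.connect_s3(
        aws_access_key_id=os.environ['CACHE_KEY'],
        aws_secret_access_key=os.environ['CACHE_SECRET'],
        host='ceph-gateway.example.com',  # hypothetical Ceph RADOS gateway
        calling_format=OrdinaryCallingFormat(),
    )
    bucket = conn.get_bucket('build-object-cache')  # hypothetical bucket

    def get_or_build(object_hash, build_fn, local_path):
        """Fetch a compiled object from the shared cache, or build and upload it."""
        key = bucket.get_key(object_hash)
        if key is not None:
            key.get_contents_to_filename(local_path)  # cache hit: skip compiling
            return
        build_fn(local_path)  # cache miss: compile locally
        new_key = bucket.new_key(object_hash)
        new_key.set_contents_from_filename(local_path)  # share with other builders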
Cloud Plans for March
Eliminate EBS usage for faster builds, 10% lower EC2 bill. Amazon EBS is the antithesis of cost-effectiveness.
Deploy more jacuzzis for faster builds and fewer EC2 instances
Run more things on spot, switch to cheaper ondemand nodes, maybe buy some reserves
Bid on an even wider variety of spot nodes
We probably won't hit another 30% reduction; we are focusing on technical debt, better metrics, etc.
Containerization of Linux builds
Cloud APIs make cost-oriented architectures fun. The batch nature of c-i is a great match for spot.
In general, spot is a brilliant design pattern; I intend to implement spot workloads on our own infra. It’s too bad other cloud vendors do not offer anything comparable.
Posted 2 days ago by Dave Townsend
Two weeks ago the developer tools teams and a few others met in the Portland office for a very successful week of discussions and hacking. The first day was about setting the stage for the week and working out what everyone was going to work on. Dave Camp kicked us off with a review of the last six months in developer tools and talked about what is going to be important for us to focus on in 2014. We then had a little more in-depth information from each of the teams. After lunch a set of lightning talks went over some projects and ideas that people had been working on recently.
After that everyone got started prototyping new ideas, hacking on features and fixing bugs. The amount of work that happens at these meet-ups is always mind-blowing and this week was no exception, even one of our contributors got in on the action. Here is a list of the things that the team demoed on Friday:
Joe Walker showed off CSS and JS code coverage tools.
Rob Campbell showed us status codes and image previews in the network monitor as well as Mihai’s patches for nicer output and cd in the web console.
Anton Kovalyov demonstrated a sourceeditor add-on that embeds metrics and contextual information into the code you see.
Alex Poirot showed UI to select which frames in the page or Firefox OS app to monitor in the developer toolbox.
Brandon Benvie amazed us with autocompletion and type inference using tern.
Irakli Gozalishvili developed a plugin for Light Table that allowed it to connect to Firefox through the debugging protocol.
Victor Porof made game developers happy with a demo of a canvas tracer.
Patrick Brosset made the inspector show badly formed style entries and synced changes between the inspector and style editor.
I made the box model view in the inspector editable for on the fly layout tinkering.
Mike Ratcliffe added events and font tooltips to the inspector as well as creating a test log syntax highlighter for Sublime Text.
Jordan Santell created a tool for viewing, editing and debugging web audio nodes.
Erik Vold made Itchpad able to live edit add-ons.
Brian Grinstead improved Itchpad with drag-and-drop file manipulation and file/folder watching.
Jeff Griffiths showed off a telemetry dashboard.
One of our contributors, Optimizer, dialled in to show off his storage inspector.
Panagiotis Astithas played with some widgets for a small live view of Firefox’s performance.
Will Bamberg created an add-on to add tooltips to CSS rules in the inspector which showed documentation from MDN.
Jan Keromnes showed more metrics and settings for the developer HUD in Firefox OS.
Soledad Penades created app templates to help new developers get started quickly.
Ryan Stinnett demonstrated wireless debugging of Firefox OS with autodiscovery.
Jan Odvarko showed off Firebug debugging dynamic scripts like eval.
Eddy Bruel demonstrated some of his work on making workers debuggable.
Mateo Ferretti showed us a game that you play by using the developer tools.
This only covers the work demoed on Friday; a whole lot more went on during the week, as a big reason for doing these meet-ups is so that groups can split off to have important discussions. We had Darrin Henein on hand to help out with UX designs for some of the tools, and Kyle Huey joined us for a couple of days to help work out the final kinks in the plan for debugging workers. Lots of work went on to iron out some of the kinks in the new add-on SDK widgets for Australis, and there were discussions about memory and performance tools as well as some talk about how to simplify child processes for Firefox OS and electrolysis.
Of course there was also ample time in the evenings for the teams to socialise. One of the downsides of being a globally distributed team is that getting to know one another and building close working relationships can be difficult over electronic forms of communication, so we find that it’s very important to all come together in one place to meet face to face. We’re all looking forward to doing it again in about six months’ time.