CRAP Talks – What We’ve Learned So Far
CRAP Talks - 7, 8, 9 Summary
As of CRAP 7 we have made feedback a central part of how we review each night's event and improve future CRAP Talks. Each event is followed by an eager refresh of our feedback tool, Usabilla, to see the audience's reaction to the night's talks and the general atmosphere. This has given us some great qualitative feedback - and we felt now was the right time to share it!
We have had 55 respondents so far, which, whilst not a large sample size, has already given us lots of great insights into how the events should be run moving forward.
So, what do users want?
In short, a bit of everything - looking through the feedback, we can see that CRAP attracts an audience of varying technical ability and commercial focus. This makes for an interesting blend of talks and discussions around the room.
The main pillars of the feedback are Analytics and Product - where one is prioritised at an event, attendees request the other in the feedback, and vice versa! CRO is a constant presence too, but it is rarely the main thing users request, as it is usually covered in some way.
Out of a total of 55 respondents, only 2 people have said they didn't learn anything new from coming to CRAP. Not bad!
With an average mood rating of 4.8, we're doing a lot of things right - it isn't always easy to curate content that speaks to all levels of technicality and seniority, but we're grateful to have had such fantastic speakers whose talks have resonated with almost all attendees!
As we build up our sample of CRAP data, we will be expanding our surveys to understand more about our visitors and what they would like to see at future events. My main takeaway from the feedback so far is how much we can action as a team from what users tell us - at the moment, we're getting an interesting playback of the key things each attendee has learned from each event, and a good indicator of general sentiment. But (and this is feedback for myself for the next survey) we want to start asking the audience more aspirational questions, so we can get richer feedback on what we should be doing at future events.
CRAP 7
17 Respondents
Desktop - 83%
Mobile - 17%
Opened my eyes a little bit to data manipulation
Learning how product teams are organised in companies
A strong strategy on scaling a product team
How to do geo magic with Alteryx
Structure / motivations and goals of product teams
Some great tips on how data can be visualized through different lenses and how to scale a team
I'm not a data analyst but John's talk was incredibly engaging so just fun to learn about something that I don't touch on in my day job.
Alteryx and data visualisation are awesome
Repotting plants can be tricky 😉 I especially loved Alice's talk about product and product team development. And I'm totally buying Alteryx...if I can ever afford it 😉
"We need to define what 'better' means"
Use of spatial analytics, product team building
About spatial data
To think about our data in new ways. To try more brutal A/B tests. To always re-evaluate team structure regularly.
Lots of context about the growth patterns of product teams.
Spatial analysis and how to "re-pot" a product team in an S(perhaps M)E
Justifications for not throwing an AB test at every argument
CRAP 8
21 Respondents
Desktop - 24%
Mobile - 76%
More about AB testing
That I might have been on the religion side of data, and that defining processes and sticking to them is critical
How to distinguish religion from science 😉
Vision and strategy of a highly efficient test culture; debate on bias of data
How other businesses implement CRO
That all data is biased. To think outside the box. The power of resource and budget.
About how Hotels.com work with CRO, and about data science and how important it is to remember the sociological science too
Analysing test program metadata is a good idea I've thought about before but was inspired to implement after Arina's talk. Also great to see people questioning the validity of traditional analytical methods/processes (Jonny/Shaun)
How other companies do things and challenges to conventional theories
Test with integrity
All about slam poetry.
I learned that hotels.com apply a "learn rate" to measure that they have found learning and an "inconclusive rate" to measure that they are learning from the projects they choose
To make use of the abundant amount of human psychology research out there, instead of spending time AB testing everything when the research has already been done. (From Shaun's talk!)
How large client-side teams work, their processes and vision.
That we focus on the wrong things during AB testing and we should have a more robust strategy when it comes to how we test, for how long, and how we analyse the data
To be suspicious of experimentation evangelicals
CRAPsters are good-natured
Importance of automation of results
To question what data is
CRAP 9
17 Respondents
Desktop - 6%
Mobile - 94%
I liked the learnings from the stock market for measuring potential
The good questions Joanna asks her colleagues.
Finally! A proper AI definition. Seems as if the tech bubble is just about buzzwords.
Understanding how to be honest about the value that your work actually may bring before anything gets built.
Product management cycle
About customer features at Just Eat, and the end of the world
Daniel's talk was very knowledgeable and inspiring in terms of industry trends and where AI is headed. The stuff around KPI contribution and how Just Eat use machine learning was insightful.
That the AI super brain will destroy the world
Metrics that Simon evaluates, interleaved A/B testing in recommendations, difference between AI and automation, etc.
ICE framework, what AI really is (and isn't)
AI definition and problems. Other talks elaborated on things I had heard about, but it was good nevertheless
Lots about how other companies approach product and analytics
Everything seemed really applicable, but probably the 3 key questions in the first talk
The end of the world is nigh!
Miles Baker