Some years ago, I worked for a big box retailer on their public-facing catalog API. It was a pretty exotic thing when it came out, as that data had all been treated as a trade secret - which was dumb, because it meant web scrapers were brutalizing their front end to get data that was public anyway.
So on the back end, the API collected data from many different sources within this very large company and put it in nice API form. This was technically cached data, and could get stale, but in practice most of it was sufficiently up to date most of the time.
Turned out the biggest customer for this data was the company itself. Lots of individual projects, needing some obscure piece of data or another, could get it from the API rather than having to hunt down whatever department actually owned the data and work out access arrangements on a piecemeal basis. I'm sure it saved countless thousands of hours of work, and greatly improved both quality and time to market for projects.
Looks like maybe IKEA could use something like that...
That's funny - for a few years I worked for an organization that did the front end crawling / brutalizing of most of the major retailers for their data. AFAIK most still do not intentionally provide any data publicly, and many spend extravagant amounts on anti-crawling tech to prevent it, resulting in an expensive arms race over ostensibly public data.
Yeah, it's just dumb, pseudo-paranoid thinking. Crawlers are going to get it anyway, and it's publicly available, so why not make it easier for everyone?
And like I said, our biggest customer was our own company. It saved an incredible amount of hassle.
If you put something on a public site it can be harvested, and there's really no stopping that. It's like video DRM: you can spend billions on the tech, but ultimately, if nothing else, anyone can still aim any camera at the screen and make a copy.
There are only two reasons I can think of for companies to bother with this:
1) They are ignorant or delusional about it and think they can beat the crawlers (possibly because a third party is making money by convincing them it can be done)
2) It forces the crawlers to spend more money working around their blocking
I'm not sure what the benefit of 2 would be, though, unless you could make them spend so much money that their entire operation is unprofitable.
Aside, as a developer, it might actually be entertaining for a while to work on (2) - constantly changing strategy to defeat scrapers - but I can also see getting burned out pretty quickly.
>Aside, as a developer, it might actually be entertaining for a while to work on (2) - constantly changing strategy to defeat scrapers - but I can also see getting burned out pretty quickly.
Well, if you replace 'scrapers' with 'advertisements', there are plenty of worthwhile OSS projects that might welcome help :)
Making that sort of business unprofitable might not be a super realistic goal, though.
> My search for a bookcase starts on ikea.com, where I navigate to the bookcase section, in the hope of getting an overview of the different series. I find instead a listing of all 257 bookcase products, including accessories, such as hinges and extra shelves.
This is a very common frustration I have. E.g., search Amazon for "bike" and sort by price "low to high". None of the top hits are bicycles; they are all accessories.
ML seems to be about guessing what the user wants based on lots of data analysis. But I think there are lots of cases where letting the user tell you what they want, by providing a hierarchical menu or explicit filters, would get much better results.
One "low tech" thing I wish someone would build: an online grocery store with every product categorized by food restrictions (gluten-free, peanut-free, sugar-free, low-sodium, etc, as well as kosher, halal, vegan, etc). Check the boxes you care about in your profile and you'll only see things that match. This would be AMAZING for folks with food allergies. The implementation would be 99.9% up-front research and data entry; the actual filtering would be as simple as WHERE clauses. But I don't know of a site that does this.
This type of "low tech" idea seems like the kind of thing Google, Yelp, and all the other big companies should be doing a better job with.
I have celiac and presumably Google has figured that out by now (or could, if they spent some time determining allergies for accounts). Yet in Google Maps I have to type "restaurant gluten free" when looking for food options. Then I have to poke around to see which might be legitimately gluten free. I've been typing this in for a decade, and never has Maps learned this basic time-saving assumption it should make.
Same with Yelp. It hasn't figured out that I always filter by gluten free and search reviews for that. It never just makes gluten free restaurants prominent, nor follows up with me the next day on whether it made me ill or things like that.
As you say, it's baffling how little of this simple filtering is done based on known user information. Note that my story is about gluten free, but it would apply to any number of food allergies or dietary preferences, yet the "market leaders" seem unable to innovate on this specificity.
This sort of comment just makes me despair, because you seem to be complaining about Google not taking enough control of determining what you want, and I see that as a tremendous problem (the problem), given what their attempts have already done.
Here's an example of what I hate about Google. I searched for recipes "without eggs". It gave me hundreds and hundreds of hits for "without eggs or butter". Now that may be what most people want, sure, but it's eradicated the possibility of other permutations. When you guess what people want to search for, you remove the ability to search for nearly every other possibility imaginable.
i think this comes back to why diversity in tech is a good thing. by which i mean diversity of background/life experience, but of course that's hard to measure...
that, or listening to customers. yes, the signal to noise ratio is low, but excellent sales people are great at that. i assume this isn't done because ads are the product, not the website itself
I think the problem with this (at least on ebay, where it's also present) is that they rely on the seller to set the metadata, and the seller is generally more interested in getting more people to look at their item than in being correct.
If a seller wants greater coverage, it's beneficial to deliberately mis-list their item so more searches return the item. As long as the site hosting the listings doesn't aggressively police their listings, doing this will provide a net benefit to them.
On the other hand, it corrodes the overall utility of the e-commerce site, but that's hard to measure, particularly if the site has had a laissez-faire attitude to listing accuracy so far. I mean, I assume (admittedly a guess) that ebay/amazon would do better if they required accurate listings, but getting from the way things are to that would require a LOT of investment and moderation, and that would be a very hard sell.
I moved coasts and renovated an old house last year. And consequently I've spent a LOT of time on Ikea's website and at our local store.
I'm stunned at how terrible their online experience is. The parent article summarizes a bunch of it. But Ikea misses out on basic blocking-and-tackling ecommerce items like product descriptions, dimensions, related products in the collection, inventory levels (not always accurate), etc.
It makes me wonder how much money they're leaving on the table with that experience.
> But Ikea misses out on basic blocking-and-tackling ecommerce items like product descriptions, dimensions, related products in the collection, inventory levels (not always accurate), etc.
Ikea's site has all that product information and more for most of the products in the search listing - but you do have to hover over things to see it.
What is quite interesting is the variation in the product pages between different countries.
The Japanese site is much more 'rounded' than the US site, and the UK site has most of the information hidden away in an accordion UI. Presumably they've tested and found each variation works best in each region. I'd love to see their metrics.
> Presumably they've tested and found each variation works best in each region. I'd love to see their metrics.
Yeah, me too. Things like this make me think a lot of those metric-based decisions amount to reading tea leaves. I know there are differences between regions, but you wouldn't expect this many of them in the case of efficiently presenting information. Unless, of course, efficiently presenting all the necessary information to customers isn't what's being optimized for.
> Ikea's site has all that product information and more for most of the products in the search listing - but you do have to hover over things to see it.
My issue is with completeness. Not every product has the basic information I'd need to decide whether to purchase it or not. Which means I don't purchase those products. Which means lost revenue for Ikea.
For a multi-national operating entity, local customisation and even the entire website are often delegated to the local legal entity itself.
So, while some of this might be intentionally done because of metrics, like you suggest, I suspect that in most cases the local development unit built its own product page because it had to.
On mobile, however, all three of them look pretty much the same, and at least for this product all three have information about dimensions easily accessible in an expanding section.
oh, it gets worse. i made an account, bookmarked a few things. a few days later i try and order them. at the last step of checkout, an "unknown error occurred". as far as i know, that account is still unable to check out.
how did i get around it? by logging out and not using an account/checkout as guest. (obviously, i would have preferred to not use them, but the missus wanted that one particular item.)
but can you imagine having such a major, heavy bug at the last/crucial point of your funnel? the whole thing sucks, but then so does their home delivery, so i'm not sure how much money they make off their website vs in-store.
I would assume they are leaving a decent chunk on the table. I've never bought something from Ikea online because of how terrible the experience is. I can think of two times when I found something close enough to what I wanted at a competitor's online store, because I didn't want to buy online at Ikea and wasn't going to be close enough to one of their stores.
> My search for a bookcase starts on ikea.com, where I navigate to the bookcase section, in the hope of getting an overview of the different series. I find instead a listing of all 257 bookcase products, including accessories, such as hinges and extra shelves. The full listing does not give me an overview, however, and I do not find anything useful.
> An impatient customer could have given up at this stage.
This is thinking from the perspective of someone who can take advantage of a logically structured site. I can tell you that when engineers think this way, the result is usually serviceable as a first pass, but product comes back with requests that make you rethink everything. To play devil's advocate, I can guess the goal with this first page of results might be to show enough different kinds of results that users will keep pressing "Load More" until they find what they want.

Imagine somebody comes to the site for parts for their bookshelf, searches for "bookshelf," and sees nothing but complete bookshelves in the first page of search results. They aren't going to scan the page for links to the product category they're looking for. They aren't going to "Ctrl-F parts" like an HN reader. They might not even search for "bookshelf parts." They might give up right there.

So product tells us we have to make sure that the first page of search results contains some bookshelves, some parts, some accessories, at least one result from each product category. That way the users will be enticed to mash "Load More" again and again until they see what they want.
Or maybe it's some other quirk of behavior that makes this design better. Or maybe they've put no effort into the design at all. I wouldn't presume to know, but on balance my money would be on engineers already having offered to make changes like the author is requesting, and product/UX rejecting those changes in favor of the current version.
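For what it's worth, that kind of product requirement is also cheap to implement. Here's a toy sketch of round-robin interleaving over per-category ranked lists - purely illustrative, I have no idea how IKEA actually builds its result pages:

```python
from itertools import zip_longest

def diversified_page(results_by_category, page_size=10):
    """Interleave ranked results round-robin so the first page shows
    at least one hit from every category (bookshelves, parts, ...)."""
    interleaved = []
    for tier in zip_longest(*results_by_category.values()):
        interleaved.extend(item for item in tier if item is not None)
    return interleaved[:page_size]

page = diversified_page({
    "bookshelves": ["BILLY", "HEMNES", "KALLAX"],
    "parts": ["extra shelf", "hinge"],
    "accessories": ["glass door", "lighting"],
})
print(page)
# ['BILLY', 'extra shelf', 'glass door', 'HEMNES', 'hinge', 'lighting', 'KALLAX']
```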
> This is thinking from the perspective of someone who can take advantage of a logically structured site. I can tell you that when engineers think this way, the result is usually serviceable as a first pass, but product comes back with requests that make you rethink everything. To play devil's advocate, I can guess the goal with this first page of results might be to show enough different kinds of results that users will keep pressing "Load More" until they find what they want.
In my eyes this is a variation of the HN classic: my users are so stupid I need to remove or break stuff that would be useful for 99% of my userbase.
> my users are so stupid I need to remove or break stuff that would be useful for 99% of my userbase
Yes, that's what engineers often sound like when we try to interpret the conclusions of usability professionals. We get frustrated because the features that we would find easiest to work with are not necessarily the features the product should have. I'm not saying I'm a UX professional who does know better than engineers; I'm just saying that as engineers we should get used to the fact that what is horrible design from our perspective is often the result of careful work done with a more representative sample of users.
Still, I suspect people are applying this out of context:
When people design business applications, ecommerce software, etc. using ideas from Google and Twitter (companies that are trying to eke out the last percent of usability) and in the process destroy their product for everyone else, I don't think it is smart.
Case in point: Google has lost me, and I guess a number of other engineers, as users and evangelists, because their software just doesn't cut it anymore after 10 years of dumbification; it now too often refuses to respect even double quotes and the verbatim setting.
Google will now take a three or four word search and remove precisely the critical phrase that makes it useful at all. Like, I was looking for "New York XYZ" and it cheerfully spat out top hits with "New York" crossed out.
Given how many Googlers are here on the forums I assume Google is well aware of the problem and we are just collateral in the quest to make Google usable for cats and dogs ;-)
Perhaps I'm just too much of an engineer. But I'm finding it hard to imagine how anyone would find it easier rather than harder to choose a bookcase when IKEA's search results include media units, shelves, doors for kitchen cupboards, hinges for doors for kitchen cupboards, and doll furniture.
To be fair, the experience is as confusing and distracting as wandering through a store, so perhaps it really is deliberate.
> We get frustrated because the features that we would find easiest to work with are not necessarily the features the product should have.
"Product should have" implies a direction in which you want to optimize. Best user experience? Maximum profitability?
There's a reason supermarkets tend to shuffle their products around all the time - people need to search for stuff again, so they might see things they otherwise won't see, and buy more. Shitty UX, better profit.
A lot of IKEA's products are highly modular and you have to buy the individual parts to create a full product. Their pricing strategy is to set base products relatively cheaply, but they have higher markups on additional accessories for customization.
This does create design challenges that other furniture stores don't have to deal with.
This seems like a common side-effect of an overly segmented microservice architecture.
I've noticed this pattern numerous times; my guess is that service/feature teams want to avoid at all costs taking on another service dependency (and all the testing and coordination that entails), so they just punt.
Sometimes a big old RDBMS housing all of your data, which is already related in reality, makes the most sense.
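As a toy illustration of why (all table names invented): when catalog, inventory, and order history live in one database, a cross-domain question is a single join instead of three service calls plus client-side stitching.

```python
import sqlite3

# Invented mini-schema: catalog, inventory and order history in one database.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE inventory (product_id INTEGER, quantity INTEGER);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    CREATE TABLE order_items (order_id INTEGER, product_id INTEGER);
    INSERT INTO products VALUES (1, 'BILLY bookcase'), (2, 'Extra shelf');
    INSERT INTO inventory VALUES (1, 12), (2, 0);
    INSERT INTO orders VALUES (100, 42);
    INSERT INTO order_items VALUES (100, 1), (100, 2);
""")

# "Which products has customer 42 ordered before that are still in stock?"
# One join; no service discovery, no API versioning, no stitching.
rows = db.execute("""
    SELECT p.name, i.quantity
    FROM products p
    JOIN inventory   i  ON i.product_id  = p.id
    JOIN order_items oi ON oi.product_id = p.id
    JOIN orders      o  ON o.id = oi.order_id
    WHERE o.customer_id = ? AND i.quantity > 0
""", (42,)).fetchall()
print(rows)  # [('BILLY bookcase', 12)]
```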
This. A web shop with carts/checkout, inventory, search, etc. should be one monolith service over one database - even if your operation is a global megacorp. Try to split it into microservices all you want, but if anyone notices (such as having to enter something twice), you failed.
That will only work if the entire "global megacorp"'s database is the web shop database that you describe. How else are you going to show availability of products in a shop nearby you? How are you going to combine the webshop's delivery logistics with the distribution centers of the entire brick&mortar retail chain? How are you going to expose loyalty card benefits online? How are you going to keep the product catalog in sync with what's in stores?
Like all retail bigcorps, I assume that IKEA has lots of internal systems, half home-grown, half based on SAP or Oracle or whatnot, that together form the entirety of their operations. These formed long before "microservices" were hip and trendy and will be around long after the hype has died away. The webshop is a part of all that, and it must be closely connected to the rest of the IT in lots and lots of places.
So, what you're suggesting is "IKEA should just rebuild their entire product, sales, supply&demand, logistics and finance IT from scratch in a single monolith on a single database". That'd be a fine end result indeed, but unachievable on any timescale.
They don't need all their IT in one service, but the web shop needs at least a view of accounts, inventories and so on. That db can be a cached view, which means it behaves as a single db with staleness issues (changed passwords or preferences may be stale, you may be able to add things to a cart that appear to be in stock but are really out of stock in the true inventory, etc.).
It's better to have such a poorly working illusion of a single db than to have no such illusion at all, which means having to log in multiple times, etc.
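A minimal sketch of that illusion, as a hypothetical read-through cache with a TTL sitting in front of the true inventory system (all names made up):

```python
import time

class CachedInventoryView:
    """Read-through cache: the web shop sees one 'database', but entries may
    be up to `ttl` seconds stale relative to the true inventory system."""

    def __init__(self, fetch_from_inventory_system, ttl=60.0):
        self._fetch = fetch_from_inventory_system  # slow, remote source of truth
        self._ttl = ttl
        self._cache = {}  # sku -> (quantity, fetched_at)

    def quantity(self, sku):
        hit = self._cache.get(sku)
        if hit is not None and time.monotonic() - hit[1] < self._ttl:
            return hit[0]  # possibly stale, but fast and always available
        qty = self._fetch(sku)
        self._cache[sku] = (qty, time.monotonic())
        return qty

# Usage: the "remote inventory system" is just a dict lookup here.
truth = {"BILLY-102": 7}
view = CachedInventoryView(lambda sku: truth.get(sku, 0))
print(view.quantity("BILLY-102"))  # 7
truth["BILLY-102"] = 0             # sold out in the real system...
print(view.quantity("BILLY-102"))  # ...still 7 until the TTL expires
```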
> Sometimes a big old RDBMS housing all of your data, which is already related in reality, makes the most sense.
Which is all fun and games until some intern gets asked to add a 'most popular' sort to the search page. He quickly figures out where the 'OrderProduct' table is, throws a quick count and a group-by into the search query, and the whole site gets taken down by it.
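For illustration, roughly the innocent-looking query in question, next to the boring fix of a precomputed counter so the search path never scans the whole order history (table and column names are hypothetical):

```python
# The innocent-looking change: count + group-by over the entire order history,
# executed on every single search request.
NAIVE_SEARCH = """
    SELECT p.*, COUNT(op.order_id) AS popularity
    FROM products p
    LEFT JOIN order_product op ON op.product_id = p.id
    GROUP BY p.id
    ORDER BY popularity DESC
"""

# The boring fix: maintain a denormalized counter out-of-band (a nightly job,
# or an increment when an order is placed) and sort on an indexed column.
REFRESH_POPULARITY = """
    UPDATE products SET popularity =
        (SELECT COUNT(*) FROM order_product op WHERE op.product_id = products.id)
"""
SEARCH = "SELECT * FROM products ORDER BY popularity DESC LIMIT 20"
```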
Of course I take your point, but there is a middle ground between everything in one monolithic DB and every bit of data having its own microservice. It takes good judgement to divide things appropriately.
That's why you do stress testing with production data...
That, I know the store won't do. But then, if they are so large that it's a crisis, they have no excuse, and if they are small enough to have an excuse, they can quickly revert to the last version and try to discover what went wrong.
I'd say it's more a matter of bad UX testing and bad project management than of implementation or infrastructure. A well-designed feature set can be architected in a nearly infinite number of ways.
Some architectures make it easier to combine data than others. In one big database it’s relatively easy and quick to look at data from a different angle. If your data is distributed over several systems it’s much harder and takes longer to implement.
What if I told you that you could be hobbled by inconsistent, scattered state in a system not explicitly designed to be distributed? There's no point in jumping to the conclusion that they are using a design you're familiar with.
Most anybody is smart enough to understand anything; they just typically aren't knowledgeable enough to grasp it quickly. Stupidity, on the other hand, is misapplied intelligence: you willfully go in the opposite direction from the proper one. AI is ignorant if it hasn't been trained, stupid if it is trained but has insufficient direction.
> Most anybody is smart enough to understand anything
I don't think this is true. At least for me it's not. In general I pick things up pretty quickly, at least above average. But there are certain things I know I've tried extremely hard at, yet I really struggle even to be competent. I also know some people whom I'd consider a little slow, and not from a lack of effort, but they have amazing abilities in other areas which I suck at.
One of the more interesting things I've read recently is that there's a certain strain in math where a concept isn't considered to be understood until it can be taught to an undergrad math class. Most anybody can make it to an undergrad level of academia.
Meaning the world as a whole doesn't understand something unless there exists one person who can explain it at a level that anybody can reasonably attain. Once this happens, it's only a matter of replication.
The domain of knowledge is far flatter and the role that intelligence plays is way smaller. You need to be really smart if you want to make advances in the knowledge of mankind. Not to understand what's already been discovered. If it's already been discovered and understood, then you can't fault intelligence for not providing you with the understanding.
"You just didn't luck into taking the right class."
I disagree. A brilliant teacher is someone who can make almost everybody feel like they understand advanced concepts. For example, Richard Feynman and his book on QED. That doesn't mean even a genius can get most of them to actually understand.
Mainly because of Wikipedia and its frequent lack of catering to a lay audience on mathematical topics, I've come face to face with how impossible it is for me to understand things at a certain level. I could do undergrad calculus, but it's easy to find topics where I feel like I have a mental disability, or like I'm a different species from the people who understand it. It doesn't seem like a difference of degree but of kind.
I submit that the topics you feel you can't learn, you only think that way because you don't have the proper insights. If someone could have given those to you, then you'd be able to understand them.
And if someone truly understood those topics, then they could give you the proper insights.
So either insight itself is broken, in that there are certain topics immune to it, topics that you or anyone could never learn even given the right teacher and the right setting and the right approach, or, well, that's the only option really. The power of insight fails because some minds are patently immune to it.
Many of these problems could be identified by simple corridor testing of the whole purchasing life cycle. It's not glamorous, but grabbing a few ordinary people and asking them to run through a workflow will identify pain points like this. Getting the low-hanging fruit here would make the experience vastly better than that of most online stores.
I think the author takes for granted the amount of work involved in software engineering. I work for a startup whose product has glaringly obvious flaws. But someone has to do the work of writing code to improve upon existing features. And there are a zillion other things that a business can choose to prioritize over improving an existing feature that gets the job done.
Breaking news: e-commerce site's search is horrendously broken.
This is why folks like me have a job. Search is hard. Ikea adding ML here would just cause more problems and add complexity. They just need a relevance engineer.
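To make "relevance engineer" concrete, here's a toy example of the kind of hand-tuned business rule one would add, demoting accessories below complete products for a category-like query. This is entirely invented and certainly not IKEA's actual ranking:

```python
def score(product, query_terms):
    """Toy relevance scoring: a text-match count times a category boost,
    so complete products outrank spare parts for a query like 'bookcase'."""
    text = (product["name"] + " " + product["description"]).lower()
    base = sum(term in text for term in query_terms)
    boost = 2.0 if product["category"] == "bookcases" else 1.0  # hand-tuned rule
    return base * boost

products = [
    {"name": "BILLY bookcase", "description": "adjustable shelves", "category": "bookcases"},
    {"name": "Extra shelf for BILLY bookcase", "description": "spare part", "category": "parts"},
]
ranked = sorted(products, key=lambda p: -score(p, ["bookcase"]))
print([p["name"] for p in ranked])  # the complete bookcase first, the spare shelf after
```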
My latest complaint: I am driving down the highway. I want Starbucks. I open Google Maps on the phone I have mounted on the dash. I tap "star" and get a list of Starbucks locations. What's this? The first one in the list is 15 miles behind me. I'm pretty sure there is one a mile ahead, but I can't take my eyes off the road long enough to find it. I try "Ok Google" but it also doesn't take into account the direction I am heading.
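The direction-aware filtering being asked for is mostly a bit of spherical geometry. A sketch using the standard initial-bearing formula, assuming you already have the phone's heading and candidate store coordinates:

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def is_ahead(my_lat, my_lon, heading, lat, lon, tolerance=90):
    """True if the destination lies within +/- tolerance degrees of travel direction."""
    diff = abs(bearing(my_lat, my_lon, lat, lon) - heading) % 360
    return min(diff, 360 - diff) <= tolerance

# Heading due north (0 degrees): a point to the north is ahead, one to the south is not.
print(is_ahead(40.0, -75.0, 0, 40.1, -75.0))  # True
print(is_ahead(40.0, -75.0, 0, 39.9, -75.0))  # False
```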
The author indicates it would be good for data to flow around, but sometimes data pre-population is annoying or uncanny.
If I'm 1500 miles from my home visiting a friend, populating my home zip code would be frustrating. The fact that LG knew my name when I went to register, even though I had not logged in - that is unsettling.