Pricing Gurus and 5 Circles Research Blog Posts

Hyatt’s “random acts of generosity” – good idea or off target?

Sunday’s New York Times Magazine has an article about a new program being introduced by the Hyatt hotel chain intended to stimulate real loyalty in the form of future business through gratitude generated by generous acts such as having a bar tab waived randomly.

It isn’t totally clear how closely the new program is associated with Hyatt’s Gold Passport loyalty program. The Times article states that recipients don’t have to be members, but Mark Hoplamazian (Hyatt’s C.E.O.) writes in a guest blog post for USA Today that the “random acts of generosity” program is being run by the Gold Passport team.

It is certainly clear that current loyalty programs are generally poor performers in terms of creating grateful customers whose relationship extends much beyond treating the loyalty card as a discount program. And I buy into the notion of gratitude as a powerful motivator.  But I’m not so sure that Hyatt’s plan will be able to walk the tightrope necessary to achieve their objectives.

  • The idea of randomness is troubling to me, in part because I wonder how well it will be applied in practice. Will a customer receiving a free massage see the gift in a positive light, or be suspicious? Will someone else who doesn’t receive a “random act of generosity” perceive unfairness? A planned paper on gratitude notes the importance of elements of randomness or discretion. Perhaps the giveaways will become merely discretionary, used as ways to appease an unhappy customer, or be perceived as such.
  • I’m also thinking of the random aspects of B.F. Skinner’s operant conditioning. Is this what’s intended – to generate a feeling among customers that they should return because they might be the recipient of a benefit next time (much like the dog who doesn’t know when it will receive a treat for good behavior)? If that’s the case, perhaps it would be better to be upfront with a truly randomized system. That approach worked well for a funky burger joint in Portland, Oregon, where the possibility of a free meal was part of the schtick, but it could backfire for Hyatt if customers simply see it as a different way to apply discounts (and perhaps would prefer lower prices).
  • Hyatt is in a bind on how to publicize the program.  On the one hand, if they promote the new program actively, they might be seen as doing this for very self-serving purposes.  Of course, that’s their intent, but they don’t want it to be obvious.  On the other hand, will word-of-mouth pay off quickly enough, or be accurate?
  • Perhaps a simpler approach would be instead to emphasize the aspects of service that don’t have as direct an impact on the consumer’s wallet.  The Times article mentions Zappos’ ability to generate gratitude by helping shoppers find a product that Zappos doesn’t have in stock.  Some of my most positive experiences of hotels, and the ones I’ll use for recommendations, are for places that go above and beyond to provide suggestions for local services, or advice for a future stay.  Perhaps Hyatt thinks that tactic has run its course?

For more information on research into the role of gratitude in relationship marketing, look for “The Role of Customer Gratitude in Relationship Marketing” by Robert W. Palmatier, Cheryl Burke Jarvis, Jennifer R. Beckhoff, & Frank R. Kardes, in the Journal of Marketing.

Hyatt’s goal should be to be seen as a chain that offers a better experience for all customers, not just the lucky few. Will the “random acts of generosity” program hit the mark?  It remains to be seen.

Idiosyncratically,
Mike Pritchard

Filed Under: News, Published Studies

SurveyTip: Get to the point, but be polite

A survey should aim to be like a conversation.  Online surveys don’t have humans involved to listen to how someone feels about the survey, to reword for clarity or to encourage, so you have to work harder to generate comfort.  Although you don’t want to take too long (the number one complaint of survey takers is time), it is still better to work up to the key questions gradually if possible.  Even though it might be the burning issue for you, you risk turning someone off if you launch straight into the most important question. A few preliminary questions should also help put the respondent into the right frame of mind for the topic.

Generally, the best approach is to build up the intensity, starting from less important questions and then moving to the critical questions as quickly as possible, building up the survey taker’s engagement as you go.  Then reduce the intensity with clarifying questions and demographics.  That way, if someone bails out early, you’ll still have the most important information (assuming that your survey tool and/or your sample company allow you to look at partial surveys).

There are exceptions of course, and one comes from the use of online panels, particularly when you set up quotas and pay for completed surveys.  In this case, one or more demographic questions, used for screening, will be placed very early. 

Or sometimes the topic of the survey dictates the order, as with awareness studies where unaided awareness is usually one of the first questions.  You might also order the questions based on the survey logic. 

If you need to include a response from an earlier question in a later question (piping), or if the answer to one question will determine which other questions are asked (skip logic), this may impose a question order. 

For complex surveys, there are likely to be tradeoffs that are best decided by careful review of the questionnaire (as a document) before starting programming.  This is why questionnaire writing is a combination of experience and science with a little bit of guesswork thrown in for good measure.

One example of how a softer start helped was a survey for an organization considering new services.  The original questionnaire launched straight into the questions for the new services after a brief introduction.  Responses trickled in slowly.  When a question about membership in the organization was moved up to the beginning, the response rates jumped and we were able to complete the survey on time.

If you show respect for your survey takers, they’ll appreciate it and they’ll reward you by completing the entire survey.  Good luck!
Mike

Filed Under: Methodology, SurveyTip

Today’s tortured questionnaire wording

I just have to share this in the hope that a reader will be able to enlighten me.  What could this possibly mean?

Not a provider that I would think of at first, but I probably would not consider it

OK, let me give some context. This is from a survey on business internet services. The researcher wants to know how likely I would be to consider each of several providers if I’m choosing a new one. The choices are as follows:

  • The only provider I’d ever consider
  • One of the providers I’d consider above others
  • Not a provider that I would think of at first, but I might consider it
  • Not a provider that I would think of at first, but I probably would not consider it
  • A provider I would never consider

If I think about it, especially with the ordering they’ve offered, I guess the research company wants to know if I would be unlikely to consider it (somewhere between “might consider” and “would never consider”).  But was there an actual phrase that they were trying to come up with?  Beats me.

It’s hard to tell whether they are losing any useful data from this poor question wording, beyond running the risk of respondents terminating out of confusion.

I saw this issue 11% of the way through the survey, so I wondered how bad the rest would be.  Fortunately there were no other major problems.

Idiosyncratically,
Mike Pritchard

Filed Under: Questionnaire

Van Westendorp pricing (the Price Sensitivity Meter)

This is a follow-up to classes I taught that included a short section on pricing research methodologies. I promised some more details on the Van Westendorp approach, in part because the information available online can be confusing, or worse. This article is intended to be a practitioner’s guide for those conducting their own research.

First, a refresher. Van Westendorp’s Price Sensitivity Meter is one of a number of direct techniques to research pricing. Direct techniques assume that people have some understanding of what a product or service is worth, and therefore that it makes sense to ask explicitly about price. By contrast, indirect techniques, typically using conjoint or discrete choice analysis, combine the price with other attributes, ask questions about the total package, and then extract feelings about price from the results.

I prefer direct pricing techniques in most situations for several reasons:

  • I believe people can usually give realistic answers about price.
  • Indirect techniques are generally more expensive because of setup and analysis.
  • It is harder to explain the results of conjoint or discrete choice to managers or other stakeholders.
  • Direct techniques can be incorporated into qualitative studies in addition to their usual use in a survey.

Remember that all pricing research makes the assumption that people understand enough about the landscape to make valid comments. If someone doesn’t really have any idea about what they might be buying, the response won’t mean much regardless of whether the question is direct or the price is buried. Lack of knowledge presents challenges for radically new products. This aspect is one reason why pricing research should be treated as providing an input into pricing decisions, not a complete or absolute answer.

Other than Van Westendorp, the main direct pricing research methods are these:

  • Direct open-ended questioning (“How much would you pay for this?”). This is generally a bad way to ask, but you might get away with it at the end of an in-depth (qualitative) interview.
  • Monadic (“Would you be willing to buy at $10?”). This method has some merits, including being able to create a demand curve with a large enough sample and multiple price points. But there are some problems, the chief one being the difficulty of choosing price points, particularly when the prospective purchaser’s view of value is wildly different from the vendor’s. Running a pilot might help, but you run the risk of having to throw away the pilot results. If you include open-ended questions for comments, and people tell you the suggested price is ridiculous, at least you’ll know why nobody wants to buy at the price you set in the pilot. Monadic questioning is pretty simple, but it is generally easy to do better without much extra work.
  • Laddering (“Would you buy at $10?”, then “Would you buy at $8?” or “Would you still buy at $12?”). Don’t even think about using this approach, as the results won’t tell you anything. The respondent will treat the series of questions as a negotiation rather than research. If you wanted to ask about different configurations, the problem is even worse.
  • Van Westendorp’s Price Sensitivity Meter uses open-ended questions combining price and quality. Since there is an inherent assumption that price is a reflection of value or quality, the technique is not useful for a true luxury good (that is, when sales volume increases at higher prices). Peter Van Westendorp introduced the Price Sensitivity Meter in 1976 and it has been widely used since then throughout the market research industry.

How to set up and analyze using Van Westendorp questions

The actual text typically varies with the product or service being tested, but usually the questions are worded like this:

  • At what price would you think [product] is a bargain – a great buy for the money?
  • At what price would you begin to think [product] is getting expensive, but you still might consider it?
  • At what price would you begin to think [product] is too expensive to consider?
  • At what price would you begin to think [product] is so inexpensive that you would question the quality and not consider it?

There is debate over the order of the questions; we prefer the order shown above, but choose the order that feels right to you.

The questions can be asked in person, by telephone, on paper, or (most frequently these days) in an online survey. In the absence of a human administrator who can ensure comprehension and valid results, online or paper surveys require well-written instructions. You may want to emphasize that the questions are different and highlight the differences. Some researchers use validation to force the respondent to create the expected relationships between the various values, but if done incorrectly this can backfire (see my earlier post). If you can’t validate in real time (some survey tools won’t support the necessary programming), then you’ll need to clean the data (eliminate inconsistent responses) before analyzing. Whether you validate or not, remember that the questions use open-ended numeric responses. Don’t make the mistake of imposing your view of the world by offering ranges.

Excel formulae make it easy to do the checking, but to simplify things for an eyeball check, make sure the questions are ordered in your spreadsheet as you would expect prices to be ranked, that is Too Cheap, Bargain, Getting Expensive, Too Expensive.

Ensure that the values are numeric (you did set up your survey tool to store values rather than text, didn’t you? If not, another Excel manipulation is needed), and then create your formula like this:

=IF(AND(TooCheap<=Bargain, Bargain<=GettingExpensive, GettingExpensive<=TooExpensive), "OK", "FAIL")

You should end up with something like this extract:

ID   Too Cheap   Bargain   Getting Expensive   Too Expensive   Valid
1    40          100       500                 500             OK
2    1           99        100                 500             OK
3    10          2000      70000               100             FAIL
4    0           30        100                 150             OK
5    0           500       1000                1000            OK

Perhaps respondent 3 didn’t understand the wording of the questions, or perhaps (s)he didn’t want to give a useful response.  Either way, the results can’t be used.  If the survey had used real-time validation, the problem would have been avoided, but we might also have run the risk of annoying someone and causing them to terminate, potentially losing other useful data.  That’s not always an easy decision when you have limited sample available.
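
If you prefer scripting to Excel, the same consistency check is easy in Python. This is a minimal pandas sketch; the file name and column names are assumptions, so rename them to match your own export.

import pandas as pd

# Load the survey export; the file and column names here are assumptions.
df = pd.read_csv("vw_responses.csv")  # columns: ID, TooCheap, Bargain, GettingExpensive, TooExpensive

# A response is consistent only if the four prices are in the expected order.
ordered = (
    (df["TooCheap"] <= df["Bargain"])
    & (df["Bargain"] <= df["GettingExpensive"])
    & (df["GettingExpensive"] <= df["TooExpensive"])
)
df["Valid"] = ordered.map({True: "OK", False: "FAIL"})

# Keep only the consistent responses for the analysis that follows.
valid = df[df["Valid"] == "OK"]
print(df[["ID", "Valid"]])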

Now you need to analyze the valid data. Van Westendorp results are displayed graphically for analysis, using plots of cumulative percentages. One way is to use Excel’s Histogram tool to generate the values for the plots. You’ll need to set up the buckets, so it might be worth rank-ordering the responses to get a good idea of the right buckets. Or you might already have an idea of price increments that make sense.

Create your own buckets; otherwise the Excel Histogram tool will make its own from the data, and they won’t be helpful.

Just to make the process even more complicated, you will need to plot inverse cumulative distributions (1 minus the number from the Histogram tool) for two of the questions. Bargain is inverted to become “Not a Bargain” and Getting Expensive becomes “Not Expensive”.  Warning: if you search online you may find that plots vary, particularly in which questions are flipped. What I’m telling you here is my approach which seems to be the most common, and is also consistent with the Wikipedia article, but the final cross check is the vocalizing test, which we’ll get to shortly.

[Figure: Van Westendorp example chart]
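
If you are working in code instead of Excel’s Histogram tool, the four curves in the chart above can be computed directly. The sketch below reuses the valid DataFrame and assumed column names from the earlier snippet; the Not a Bargain and Not Expensive columns are the flipped curves just described, and plotting requires matplotlib.

import numpy as np
import pandas as pd

prices = np.arange(0, 51, 1)  # price grid – choose increments that suit your product

curves = pd.DataFrame({
    "Price": prices,
    # % who consider the price too cheap (falls as the price rises)
    "TooCheap": [(valid["TooCheap"] >= p).mean() for p in prices],
    # % who no longer see the price as a bargain (rises with the price)
    "NotBargain": [(valid["Bargain"] < p).mean() for p in prices],
    # % who do not yet find the price expensive (falls as the price rises)
    "NotExpensive": [(valid["GettingExpensive"] > p).mean() for p in prices],
    # % who consider the price too expensive (rises with the price)
    "TooExpensive": [(valid["TooExpensive"] <= p).mean() for p in prices],
})

curves.set_index("Price").plot()  # one line per question, as in the chart above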

Before we get to interpretation, let’s apply the vocalization test.  Read some of the results from the plots to see if everything makes sense intuitively.

“At $10, only 12% think the product is NOT a bargain, and at $26, 90% think it is NOT a bargain.”

“44% think it is too cheap at $5, but at $19 only 5% think it is too cheap.”

“At $30, 62% think it is too expensive, while 31% think it is NOT expensive – meaning 69% think it is getting expensive” (remember these are cumulative – the 69% includes the 62%). Maybe this last one isn’t a good example of the vocalization check, as you have to revert to the non-flipped version. But it is still a good check; more people will perceive something as getting expensive than too expensive.

Interpretation

Much has been written on interpreting the different intersections and the relationships between intersections of Van Westendorp plots. Personally, I think the most useful result is the Range of Acceptable Prices. The lower bound is the intersection of Too Cheap and Expensive (sometimes called the point of marginal cheapness). The upper bound is the intersection of Too Expensive and Not Expensive (the point of marginal expensiveness). In the chart above, this range is from $10 to $25. As you can see, there is a very significant perception shift below $10. The size of the shift is partly accounted for by the fact that $10 is a round number. People believe that $9.99 is very different from $10; even though this chart used whole dollar amounts, the effect is still apparent. Although the upper intersection is at $25, the Too Expensive and Not Expensive lines don’t diverge much until $30. In this case, anywhere between $25 and $30 for the upper bound would probably make little difference – at least before testing demand.
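
To read those bounds off the data rather than by eye, you can interpolate between the grid points where the relevant curves cross. This helper is just a sketch built on the curves table from the earlier snippet, and it assumes each pair of curves crosses exactly once on the price grid.

def crossing(prices, a, b):
    """Price at which curve a crosses curve b, by linear interpolation between grid points."""
    diff = a - b
    for i in range(1, len(prices)):
        if diff[i - 1] == 0:
            return prices[i - 1]
        if diff[i - 1] * diff[i] < 0:  # sign change between adjacent grid points
            frac = abs(diff[i - 1]) / (abs(diff[i - 1]) + abs(diff[i]))
            return prices[i - 1] + frac * (prices[i] - prices[i - 1])
    return None  # no crossing found on this grid

p = curves["Price"].to_numpy()
# Lower bound: Too Cheap vs. Expensive (the non-flipped Getting Expensive curve)
expensive = 1 - curves["NotExpensive"].to_numpy()
lower = crossing(p, curves["TooCheap"].to_numpy(), expensive)
# Upper bound: Too Expensive vs. Not Expensive
upper = crossing(p, curves["TooExpensive"].to_numpy(), curves["NotExpensive"].to_numpy())
print("Acceptable price range:", lower, "to", upper)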

Some people think the so-called optimal price (the intersection of Too Expensive and Too Cheap) is useful, but I think there is a danger of trying to create static perfection in a dynamic world, especially since pricing research is generally only one input to a pricing decision. For more on the overall discipline of pricing, Thomas Nagle’s book is a great source.

Going beyond Van Westendorp’s original questions

As originally proposed, the Van Westendorp questions provide no information about willingness to purchase, and thus nothing about expected revenue or margin.

To provide more insight into demand and profit, we can add one or two more questions.

The simple approach is to add a single question along the following lines:

At a price between the price you identified as ‘a bargain’ and the price you said was ‘getting expensive’, how likely would you be to purchase?

With a single question, we’d generally use a Likert scale response (Very unlikely, Unlikely, Unsure, Likely, Very Likely) and apply a model to generate an expected purchase likelihood at each point. The model will probably vary by product and situation, but let’s say 70% of Very Likely + 50% of Likely as a starting point. It is generally better to be conservative and assume that fewer will actually buy than tell you they will, but there is no harm in using what-ifs to plan in case of a runaway success, especially if there is a manufacturing impact.
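
As a tiny worked example of that kind of model (the response counts are made up, and the 70%/50% weights are just the starting point mentioned above):

# Counts of responses to the purchase-likelihood question (hypothetical data).
likert_counts = {"Very likely": 40, "Likely": 80, "Unsure": 60, "Unlikely": 30, "Very unlikely": 20}

# Conservative weights: assume only a fraction of stated intent converts to purchase.
weights = {"Very likely": 0.70, "Likely": 0.50}  # everything else counts as zero

total = sum(likert_counts.values())
expected_buyers = sum(likert_counts[k] * w for k, w in weights.items())
print(f"Expected purchase rate: {expected_buyers / total:.1%}")  # about 30% with these counts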

A more comprehensive approach is to ask separate questions for the ‘bargain’ and ‘getting expensive’ prices, in this case using percentage responses.  The resulting data can be turned into demand/revenue curves, again based on modeled assumptions or what-ifs for the specific situation.
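
One illustrative way to build those curves – an assumption of this sketch, not a standard model – is to interpolate each respondent’s stated purchase likelihood linearly between their ‘bargain’ and ‘getting expensive’ prices, average across respondents at each price, and multiply by price for expected revenue per prospect.

import numpy as np

# Hypothetical per-respondent data: (bargain price, likelihood %) and (getting expensive price, likelihood %).
respondents = [
    ((10, 90), (25, 40)),
    ((12, 80), (30, 30)),
    ((8, 95), (20, 50)),
]

prices = np.arange(5, 41, 1)
demand = []
for price in prices:
    probs = []
    for (p_low, l_low), (p_high, l_high) in respondents:
        if price <= p_low:
            probs.append(l_low / 100)
        elif price >= p_high:
            probs.append(l_high / 100)  # you might instead taper toward zero beyond this point
        else:
            frac = (price - p_low) / (p_high - p_low)  # linear interpolation between the two stated points
            probs.append((l_low + frac * (l_high - l_low)) / 100)
    demand.append(np.mean(probs))

revenue = prices * np.array(demand)  # expected revenue per prospect at each price
best = prices[int(np.argmax(revenue))]  # price with the highest modeled revenue per prospect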

Conclusion

Van Westendorp pricing questions offer a simple, yet powerful way to incorporate price perceptions into pricing decisions.  In addition to their use in large scale surveys described here, I’ve used these questions for in-depth interviews and focus groups (individual responses followed by group discussion).

Idiosyncratically,

Mike Pritchard

References:

Wikipedia article: http://en.wikipedia.org/wiki/Van_Westendorp%27s_Price_Sensitivity_Meter

The Strategy and Tactics of Pricing, Thomas Nagle, John Hogan, Joseph Zale, is the standard pricing reference. The fifth edition contains a new chapter on price implementation and several updated examples on pricing challenges in today’s markets.

Or you can buy an older edition to save money. Search for Thomas Nagle pricing

Pricing with Confidence, Reed Holden.

The Price Advantage, Walter Baker, Michael Marn, Craig Zawada.

Van Westendorp, P.H. (1976), “NSS Price Sensitivity Meter – A New Approach to the Study of Consumer Perception of Price,” Proceedings of the 29th ESOMAR Congress, Venice.

Filed Under: Featured Posts, Methodology, Pricing Tagged With: pricing, Van Westendorp

comScore’s State of U.S. Online Retail Q1 2009

The recent comScore presentation on the State of Online Retail in the U.S. contained few surprises – mainly confirmations, together with some interesting perspectives. For those unfamiliar with this material, comScore creates a quarterly report on online retail, combining survey results with data from comScore’s behavioral panel. The behavioral data covers many aspects of online behavior related to retail, including search, media exposure, and of course actual online transactions. They also add in some other sources to give information about offline purchasing influenced by online activity. Some of these results will eventually become available from the U.S. Department of Commerce, but comScore produces their reports several weeks in advance, and their figures are consistently close to the official numbers (the Q4 figures use differing methodologies for gift card transactions, so the spread is wider).

Read the full report to draw your own conclusions (you can sign up here) but here are a few impressions:

Predictably, Q1 2009 saw the end of the strong growth of the past several years. I think the results are positive enough to be heartening for the continuing success of online retail, although some of the growth probably comes at the expense of offline, as people increasingly use online channels to seek lower prices.

Online retail spending may have bottomed out, but it is unclear when it will start to grow again. The current overall flatness is the result of a combination of factors for different groups. Lower-income households (under $50K) show reduced spending over the same period last year, while higher incomes show some growth. There is also a distinction between age groups, with those under 44 increasing online spending and older consumers holding off. It looks like younger people are less concerned because of longer time horizons, or generally don’t want to defer spending any longer, while the older brackets are saving to rebuild their retirement assets instead of purchasing. Depending on your perspective on the role of consumer purchasing in the U.S. economy and levels of saving, this is either a good thing or scary for the speed of the recovery.

Online prices, lower at the turn of the year through February, have now increased as inventories have been worked off and promotional activity has been reduced to match.

Presumably reflecting the increased significance of comparison shopping and other money-saving tools, the Internet has become more important to buying decisions than a year ago. Three-quarters of consumers do online research before buying offline (I don’t know if this is an increase). And more people are using coupons than ever before, including from online sources. No surprise: the role of the Internet as an integral part of shopping – both online and offline – is confirmed during tough economic times. Regardless of whether the sale is completed offline, retailers must pay attention to providing useful information (not just discounts and sales, but also product information). I was reminded of this recently when buying a refrigerator from Sears. Maybe not the best use of time, but it was more efficient to do some preliminary research online, then discuss benefits with a salesperson in the store. In this case, we made the purchase in the store, then changed our minds after looking more thoroughly at home (and had to run the gauntlet of the Sears phone system to make the change – but that’s another story). Next time, it will probably be better to do the final check after talking to the salesperson, using a laptop or smartphone.

comScore’s figures for incremental offline sales from search or display advertising (16% display only, 82% search only) might have you agreeing with the idea that search advertising is much more effective, but comScore points out that the reach is typically much higher for display, so the dollar lift may be higher for display. In addition, the synergy for combined search and display (a 119% increase) is clear. Cost effectiveness will vary with the situation.

Enjoy the full report!

Idiosyncratically,
Mike Pritchard

Filed Under: Published Studies

SurveyMonkey acquired – what does this mean?

SurveyMonkey is being acquired by an investor group. Dave Goldberg, who previously led Yahoo’s music business, will take over as CEO, but according to the news reports, founders Ryan Finley (the current CEO) and Chris Finley will remain with the company.

The company will be opening an office in Menlo Park, CA, where Goldberg is based.  From the current job openings, it looks like SurveyMonkey will be moving marketing and administrative functions to Menlo Park, while development and operations will remain in Portland, OR.

This acquisition is a tribute to the great work done by Ryan and Chris over the past 10 years in building SurveyMonkey’s capabilities and brand. It’s also evidence of the “growing importance of self-service tools for online surveys” (Forrester Research). That’s why 5 Circles Research created SurveysAlaCarte™ modular services and training.

It remains to be seen whether the acquisition will mean increased attention to enhancing SurveyMonkey’s features and capabilities. Kara Swisher’s BoomTown column about the news suggests that she thinks SurveyMonkey offers analytics. I hope the comment just shows a lack of understanding. Low-end self-service tools across the board have limited analytics and reporting. The best that any of them offer is filtering and “cross-tabs”, but their cross-tabulation capabilities are very limited. In particular, if I want statistical significance testing I need to export the data and either use other tools to do individual tests (SPSS or Excel) or run a full report using Wincross or some other product. Either way, this process is time-consuming and expensive. It looks like a good opportunity for SurveyMonkey or its competitors to differentiate, or for a third party to provide a decent, inexpensive add-on.

In any case, congratulations to the SurveyMonkey founders!

Idiosyncratically,
Mike Pritchard

Filed Under: News, Reporting
