I've been teaching and tutoring people in programming since my high school days. One of the best tools I've come to rely on is showing my student what the end result of the program looks like and how it behaves. It's often much harder for people to grasp what you're trying to do with specific lines of code without having seen the end goal. In my experience, it's far easier and more natural for people to see the final result and then work backwards alongside you, figuring out the bits and pieces required to make it work.
#4 sounds like an awesome idea. Who's with me? Finding the right balance between what people are willing to pay for quality teaching and making it worthwhile for quality teachers to spend their time would be a challenge, I'd imagine, among other things, but I really do like the premise.
Let the teachers set their own prices! Students can decide whether a teacher's reputation, past students, style, etc. are worth the asking price.
My name is Scott Mandel, and I'm the Co-Founder of Snapclass. You're right, we're similar to this idea. Snapclass connects teachers and students live to exchange knowledge. Anyone can be a teacher and host classes for free or for a fee. Students can choose their teacher based on reputation (reviews and ratings), profile, and availability.
We're launching in the next two weeks and we are looking for talented people.
- Talented backend developers in Ruby on Rails
- Java developers with experience in real-time video conferencing
I had a similar idea a few months back: using video to connect very talented teachers in the 3rd world (say, India) with 1st-world students (starting with America).
Something interesting I think the article did not mention is the difference between the two sets of experiments: one where you roll once vs. one where you roll three times.
As a participant I'd feel more inclined to lie about my first roll if I rolled a higher number in my second or third attempt.
I'd be very interested in the results if n was not 76 but instead 7600.
From the paper (http://home.medewerker.uva.nl/s.shalvi/bestanden/Shalvi%20et...), page 5-6: "Shalvi et al. (2011a) asked participants to roll a die under a paper cup with a small hole at the top allowing only them to see the outcome, and earn money according to what they reported rolling (1=$1, 2=$2, etc.). As participants’ rolls were truly private, the authors assessed lying by comparing the reported distribution to the distribution predicted by chance (Fischbacher & Heusi, 2008). Participants were asked to roll three times but to report only the outcome of the first roll. Although all three rolls were private, the distribution of reported outcomes resembled the distribution of choosing the highest of the three observed rolls. Modifying the task to allow participants to roll only once reduced lying. Participants clearly found value in being able to justify their lies to themselves. The authors concluded that observing desired counterfactuals, in
the form of desired (higher) values appearing on the second or third (non-relevant for pay) rolls, modified participants’ ethical perceptions of what they considered to be lying. Observing desired counterfactual information enabled participants to enjoy both worlds: lie for money, but feel honest."
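The benchmark the paper describes — the distribution of the highest of three rolls vs. a single honest roll — is easy to sketch. Here's a quick simulation (my own illustration, assuming a fair six-sided die, not code from the paper) showing how different the two distributions are:

```python
import random

random.seed(0)
trials = 100_000

# single[k]: how often an honest first roll shows face k (uniform, ~1/6 each)
single = [0] * 7
# max_of_three[k]: how often the highest of three rolls is k (skewed high)
max_of_three = [0] * 7

for _ in range(trials):
    rolls = [random.randint(1, 6) for _ in range(3)]
    single[rolls[0]] += 1
    max_of_three[max(rolls)] += 1

for face in range(1, 7):
    print(face, round(single[face] / trials, 3),
          round(max_of_three[face] / trials, 3))
```

A fair roll puts about 16.7% on each face, while the max of three rolls lands on 6 about 42% of the time (91/216 exactly) and on 1 almost never. If reported outcomes look like the second column, people are effectively reporting their best roll, not their first.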
Shalvi et al. (2011a) is:
Shalvi, S., Dana, J., Handgraaf, M. J. J., & De Dreu, C. K. W. (2011a). Justified ethicality: Observing desired counterfactuals modifies ethical perceptions and behavior. Organizational Behavior and Human Decision Processes, 115, 181-190.
Yes, that is interesting and true. The Economist article is a bit lame.
76 was enough to convince me that the effect isn't all publication bias (P<.01, and it's intuitively plausible). I would rather see a more diverse population (than college freshmen) than larger numbers.
The call for larger sample sizes isn't always appropriate. It can often lead to spurious inferences.
As Jacob Cohen (famous statistician) has said, "all null hypotheses, at least in the two-tailed forms, are false".
That is, with nearly any hypothesis about differences between groups, given a large enough sample size, you're likely to find a significant difference.
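Cohen's point can be sketched with a toy simulation (my own illustration, not from the thread): give two groups a tiny, practically meaningless true difference, and the p-value collapses as n grows.

```python
import math
import random

random.seed(1)

def p_value(n, delta=0.05):
    """Two-sample z-test on groups of size n whose true means differ by delta."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(delta, 1.0) for _ in range(n)]
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    se = math.sqrt(2.0 / n)  # standard error of the difference; both sds = 1
    z = (mean_b - mean_a) / se
    # two-tailed p-value from the standard normal distribution
    return math.erfc(abs(z) / math.sqrt(2))

for n in (76, 7_600, 760_000):
    print(n, p_value(n))
```

With a true difference of only 0.05 standard deviations, n = 76 will almost never reach significance, but by n = 760,000 the test flags the (trivial) difference essentially every time. Large n makes the test sensitive; it doesn't make the difference matter.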
I think the difference here is between the two experiments (one with three rolls of the die and one with a single roll), not a difference between groups. You could use the exact same group of participants for both experiments.