Reihan Salam: Hi everyone. I'd love to invite Amanda Lenhart of Pew, Victoria Grand of YouTube, and Del Harvey of Twitter up to the stage for our first panel, which is on Playing Nice: The Challenges of the Digital Playground. So I should warn you guys, I'm going to very rigorously keep you to your five-minute introductions and discussions, so that we can launch into what will hopefully be a ferocious, feisty, contentious discussion about the digital playground. Adora has talked a lot about some of the dangers associated with these new civic spaces. And it does seem as though, as the internet has become more central to our lives, much more of the conversation about civic life is happening in that space. And that obviously introduces some complications and also raises anxieties, particularly among parents of young children.
So Victoria, would you like to kick off the discussion?

Victoria Grand: Sure, yeah, I'd love to. My name is Victoria Grand, and I look after communications and policy for YouTube, and it's great to see a lot of familiar faces here. Now, just while we've been here this morning, in this short span of time, about 100 new hours of video have been uploaded to YouTube. That's about seven years of new video a day. And in fact, more video is broadcast on YouTube in just a month's time than was broadcast on all three major US networks over the past 80 years. So the scale of YouTube is phenomenal, and as Adora showed in countless really amazing examples, many of which I hadn't seen before, the diversity of content that we see every single day is phenomenal as well. So in those hours are Lady Gaga's newest video, baby's first steps, a Syrian demonstration, and a teenager's rant, sort of all combined together. And lots and lots of cute kittens and dogs on skateboards as well. What's incredible is that many of these platforms are basically seven years old, maybe eight years old, and it's hard to imagine, especially if you're in the younger generation, a world where the Facebooks, and the Twitters, and the YouTubes, and the Tumblrs don't exist. Because these platforms are such critical communications platforms, we always start from the same premise. Every single day, my day is filled with phone calls about, what do we do about this video? Or, this has been uploaded. And the starting point for us is always a bias toward free expression, toward leaving as much content up as we can. And there are times when it's a lot easier, obviously: for things like child abuse, and pornography, and content that's illegal, these calls are very easy to make.
And then there's content that's very challenging. It's hard to know how to properly balance the right of people to express themselves against our ability to maintain this vibrant and engaging community to exchange ideas. Some of the issues that we've seen coming up recently are some of the ones that Adora actually talked about. And if we can click here on the first video: this "am I pretty" trend. This is now starting to become a meme of the conference. So we saw this starting about a month ago, and it's actually about a four-year-old trend that's been across YouTube and Twitter for some time. These are young girls who get on the camera to ask, am I pretty, am I ugly? And the question is, is this a disturbing teenage trend or is it not? The girl in this video talks about how all of her friends tell her she's pretty, but she doesn't really know if she believes them. So she wants to sort of crowdsource the answer by asking a lot of people. And you can imagine that the responses are all over the place, right. There are some that are positive and affirming, actually probably more than I thought there would be. And then there are some that probably do nothing for a teenage girl's fragile self-esteem. And the question that we face as a private company is, should a private company take away the right of a girl to ask the world, am I pretty? Is that our role, to take away that right? And is it our role to take away the right of any of you, or any of us, to say to anybody else, you're pretty or you're ugly? Obviously there are going to be extreme responses to this question that go into a kind of hate speech territory. But the baseline question is, what do we do with this video? Do we sort of decide to make it go away? What's also interesting about this trend is that with one of the most famous "am I pretty" videos, we recently found out that it was actually a 21-year-old art student who had posed as a 14-year-old girl to put up these videos.
Other trends that we've seen recently, and again this is just part of our day-to-day: the cinnamon challenge. These are girls and boys from all over the world who will... [video plays] So what she's going to do is fill up a tablespoon of cinnamon and eat it, and then she's going to have some kind of a reaction that ranges from gulping, to gasping, maybe all of the way to potentially throwing up. Though they usually don't; it's just sort of repulsive, right. And so doctors have come out and said, this is bad for people's respiratory systems, it causes asthma, YouTube, please take these videos down. And again we're left to assess: OK, how dangerous is this? Do we need to take action on it? A very similar trend with Axe body spray. So, in this situation, what's been happening is people have been taking this Axe body spray and putting it in their mouths. Again, just these gross-out videos. Axe body spray contacts us and says, this stuff is actually really dangerous to ingest, YouTube, take action against this. And what we have to think about is, again, do you take all of the Axe body spray challenges off the site? Another one that we've looked at, and I know I've talked with several of you about these in the child safety community. [music] And I'm just going to caveat this by saying, the next scene that you're going to see is a bit graphic. OK, I'll switch to our next video. But the cutting videos are really interesting.
People are uploading videos, and in large part these videos are actually public service announcements. They're PSAs by fellow teenagers. In many cases, they're people who are either saying, don't do it, or who are documenting in a neutral way and trying to explain what self-injury is all about. I think we found, there was a study that said that only in about 7% of the cases of all the self-injury videos that are uploaded to YouTube is the video actually promoting the act of self-injury. And so the question is, we have some advocates who will say, actually, these are really authentic voices, it's very important to keep them on YouTube. And then other researchers who say the very act of looking at the cutting actually triggers additional cutting, and these videos should all be removed from YouTube. So, an interesting balance there. And then finally, the smoking videos. We recently talked to a group of state attorneys general who asked us to remove videos of people smoking from YouTube as a whole, because when you see images of people smoking, you're more likely to smoke. And so, as you can imagine, we have videos of people smoking in the back of a Syria demonstration video, on the curb. It's a global site.
So you can imagine what a massive undertaking it would be to remove all images of people smoking. And so again, these are just some of the questions that we deal with every day. And this is just in the teen safety context. You can also imagine that we deal with questions of, what do you do when terrorists use the site as a distribution platform or for recruitment? What do you do with all manner of sexual content and even graphic content? Is there content that's just too graphic for YouTube? The stuff that's coming out of Libya and Syria, should that just not be allowed to be seen? What do you do in sexually charged cases? What do you do with body paint? What do you do with man boobs? What do you do with... there's just all kinds of cleavage of all sorts. So these are the types of questions that we grapple with every day.

Reihan Salam: Thanks so much, Victoria. Your remarks were delightful, informative, provocative, and a bit over time. So Del, I'm going to hope that you're going to be a little bit better on that front. Del, you're with us from Twitter, which is a kind of fascinating new civic space. Please tell us a bit about yourself and also about the work that you do.
Del Harvey: So, I'm Del Harvey. I have been at Twitter since the dawn of time for Twitter, -ish, which is like October 2008. So, I mean, kind of a long time; in internet years, it's an incredibly long time. And I head up the trust and safety team there, which is actually an umbrella organization for a number of sub-teams covering everything from account activity and abuse, to brand policy, to user safety, to legal policy, to ads policy. There's a lot of policy, as it turns out. Policy and abuse. So essentially, anything to do with misuse of the site ends up coming to my team for us to deal with. And in terms of the things that we see, I am fortunate in that we are not hosting video, which I greatly appreciate in terms of it making my job easier. But we definitely see such an array of content. You could never actually watch all the video uploaded to YouTube every day, even if you hired a battalion of people. Similarly, if you were to actually try to read all of the tweets, it would not go so well for you. And that is compounded by some other parts of Twitter that make it particularly complicated, in that we have very little context for tweets. You get 140 characters. If it's a single tweet, you really just have that 140 characters, no other information about what that tweet actually signifies or means. Which becomes relevant when you see a conversation that starts out between two people and the first @reply is, hey bitch. And you're like, all right, well, there are a number of possibilities here. This could be somebody being mean to someone else. This could be two friends, or this could be two accounts that are pretending to be dogs, and all of these have a chance and do, in fact, exist. We have, for example, a very healthy My Little Pony role-playing community. It's complicated to make the decision of, oh, this shouldn't be allowed, or this shouldn't be allowed. In general, Twitter is very much a platform. We view ourselves as a platform that is there for folks to use as they see fit. And we've found, as Victoria, I think, has as well, that in a lot of ways the community does do self-moderation also.
And if somebody's out of line in a group of people, then usually that group of people tells the person they're out of line. In terms of actual content that gets posted, there's such a sprawling variety of use types for Twitter within that 140 characters, which is kind of impressive, that we also run into the same problem of: even if we were to moderate content, which content would fall within the area of, this is what should be moderated? And that's something that obviously we have a lot of conversations with folks about, because we do allow, for example, parody accounts. And we've had people say, well, yes, this is marked as a parody account, it's in compliance with your parody policies, but it's not funny. Which sometimes it's not. And that doesn't necessarily mean, though, that we want to remove it or that it's wrong to have it. One of the biggest things that we try to reiterate is that we really strongly believe that the correct answer to what someone perceives as bad speech is actually more good speech. In that, inevitably, if you remove content, what happens is it gets re-posted. And it often gets re-posted by 30 people, or 50 people, or even more. And it's kind of the demonstration of the Streisand effect, where a photographer was doing a documentary on erosion along the California coast, and one of the images included an image of her house in it. And she sued to prevent that image from being included in this coffee table book, meaning that exponentially more people then saw the image and knew it was her house than if the coffee table book had just gone out. So attempting to remove information almost always results in it being distributed more broadly, and that's probably one of our biggest challenges.

Reihan Salam: Thanks very much, Del. And you were under time. Very slightly, but a little bit under time. I thank you for it. And I have a lot of questions for you. But Amanda, a lot of your research is specifically about young people and how they're using the internet, and particularly how they use mobile technology.
But I was wondering if you could illuminate our perspective with a bit of your scholarly work at Pew.

Amanda Lenhart: Sure, sure. So, I'm a senior researcher at the Pew Research Center, which is a nonprofit, nonpartisan research center based in Washington, DC. Because I'm kind of the data lady on the panel, I want to start out by talking a little bit about what we know. And so one of my last large pieces of research was on digital citizenship writ large, which I think is a real theme that we have here today. And so I just want to make sure that we have a sense of what teens think and experience when they go to social spaces online in particular. And so when we interviewed teens, we found that, first off, American teens are socializing in digital space, and I think that's an incredibly important thing to remember. I think we all, in this room, know it, but it's always good to remember that 80% of kids are actually using social media. It's also important to remember that we asked teens whether or not they felt that, in general, social spaces were kind or unkind spaces. We wanted to know the emotional climate of online space, and the majority of kids, about 69%, said people are mostly kind online. So in general, teens have this sense that social space online is a positive place. But it's important to remember that 20% of teens say it's actually not a positive place. I'll talk a little bit more in a minute about some subgroups who are more likely to say that. But even with this positive sense that teens have, it's important to remember that teens also witness negative behavior. 88% of teens say, I have seen somebody else act mean and cruel to another person on a social networking site. That's a lot of kids. But I think if you asked kids that same question about, have you witnessed somebody being mean and cruel to somebody in the hallways of your high school, you would probably get a similar number. It's also important to know that for most people it's a witnessing experience, not a personally felt experience.
Only about 15% of kids say that they have had somebody be mean and cruel directly to them in a social media space. It's also important to remember that teens are actually telling us that they stand up for other kids when they see them being harassed online. Certainly the most common response to witnessing people being mean in a social space is to ignore it. And that, in many ways, makes sense, because often, if you're an adolescent, you may not know the full context of what you're seeing in digital space. It might be a joke; there might be a lot of back story that you have no idea about. And in fact, in our focus groups, teens told us that they were more likely to stand up and defend somebody who was closer to them, where they're more likely to know the back story. And so, as I said, about 80% of teens have actually stood up for somebody they've seen being badly treated online. But because this is a yin-and-yang kind of experience, we also heard teens tell us that they joined in on the harassment. About 20% of kids said, yep, I've joined in, I've piled on, I've helped to harass another person in a social space. And about two-thirds of teens have actually witnessed other people do this. So it is both a generally kind space, but it is a space where teens witness, and experience, and in many cases enact cruelty to each other. So I think we then have to ask ourselves, what kind of space do we really want? What kind of expectations for perfection do we have in social space? Should we also be more worried about the kids who are more likely to say that the internet, that social media, is an unkind space? And that's, in general, middle school girls and African American youth, who are substantially more likely to tell us that people are mean online, people are cruel and unkind. I also think we want to talk today and ask ourselves about, what is the role of parents and adults in helping to set norms around online behavior? Because we see this. We know that teens act badly online. We know that generally they don't, but they see it. But what is our role as adults?
And when we asked teens where they heard general information about online safety, parents were the dominant place. In fact, parents have a remarkably powerful role in teens' lives. Teens told us this themselves, which is actually sometimes quite difficult to get adolescents to admit: that their parents were incredibly instrumental in helping them to think through many of these kinds of norms of behavior. It's also important to note in the data, though, that teens draw on an enormous number of sources; the average teen named five different sources for their general information about online safety. So that includes teachers, who come in second, right behind parents. That also includes websites and web properties. It includes people like youth pastors and coaches, and your best friend's mom. There's an enormous number of places where teens are getting this information. And so, in many cases, it's a "takes a village" finding: really many, many people are contributing to teens' understanding of how to behave online. It's also, I think, important to think about specific incidents. So we asked teens, OK, when you have a specific moment where you've witnessed something online, what do you do? About a third of teens are willing to reach out to other people and say, hey, I need some advice. The other two-thirds aren't. Again, we don't know exactly what these incidents are about, we don't know their severity, but when teens do reach out, they reach out to their peers. So again, even as we think about adults and their role in helping to set norms for this online behavior, peers are actually where teens are going for specific advice in a particular situation. So we can't ignore the importance of peer education. And finally, I just want to problematize this reliance on adults and peers and parents. Certainly, in a space where we're trying to figure out how to help people, we often want to use education as our main way of fixing some of these problems that we see in internet space. But I think we also need to remember that not every parent is capable of being the person to give advice to every child. Not every adult is a good role model for how to behave online. Oftentimes we have expectations for our children that we as adults don't meet. And so I think we need to ask ourselves, what is a reasonable set of expectations to have, and what kinds of trade-offs are we willing to make to get to that point?
And I'll stop there and look forward to the discussion.

Reihan Salam: Thanks so much, Amanda. So Victoria, to that last point that Amanda just made: the impression that I get from some of the issues that you raised in your discussion is that many people are looking to YouTube to adjudicate these much larger social and civic issues. For example, cigarette smoking is a bad thing, ergo, I want you to remove images of it. And I kind of wonder how you feel about that, because given that YouTube is actually serving this civic function, many people believe that, well, ergo, you have this larger civic responsibility. Yet you obviously have a lot of other different agendas that you're seeking to fulfill. And I wonder how you navigate that terrain.

Victoria Grand: Sure. I think it's a challenge when people expect a private company to solve these disputes between two children, for example, or a situation like exposure of children to tobacco. One of the things that happened when the state AGs came to us is they said, well, look, we've approached the MPAA, and the MPAA is willing to take a look at this and to do more to not show images of people smoking in their movies. And the difference between the MPAA, and what they can do, and what YouTube can do, is that we don't actually create the content. And we can't edit the content either, right. And so it's a different kind of situation. I think it's important to note that people come to YouTube primarily to be entertained, but I do think that it's a good place to be educated as well. And Del and I, when we see each other, talk about whether we can be doing more to raise the profile of the education resources that we have. I think it's an ecosystem where we all can probably be doing more. I think schools can be teaching internet ethics in a much more direct way. But obviously there's a lot the platforms can continue to do to educate people.
Reihan Salam: Del, Twitter isn't the first thing you've done in the space of safety. Earlier on, you worked to protect children from online predators. And you talk about a very wide range of issues, but I wonder specifically with regard to children, and feelings being hurt, and this kind of domain. This seems to be an area in which there are a lot of people, and Amanda was talking about parents and their attitudes and expectations. I wonder, is this particularly fraught? Is a lot of the kind of commentary and feedback that you get people saying, my child was hurt in this way, by these comments, and I want you to do something about it, I want you to shut down this particular account?

Del Harvey: We actually don't have that significant of a teen presence on Twitter, as compared to an adult presence. There's not as high a number of teenagers. That, however, certainly doesn't keep adults from having their feelings hurt. So I can still certainly speak to people who have been hurt by comments that they've received.
It's actually been very interesting to watch the evolution of how folks handle getting mean comments. Because the first kind of report that you get is, this person said something terrible to me, you need to stop them. And sometimes you go look, and they actually said something terrible first, but the other person replied, and that was the part that wasn't OK. And sometimes it's more like, well, this person said something terrible about me; and you go and look, and the person said something terrible, and then they tweeted, this person said something terrible about me, look at what they said, and now all their friends have jumped on the person that said the terrible thing. This weird sort of, you need to stop this person from saying mean things, but I've already taken care of it also, over here, this other way. But why do you let people post mean comments? And it's a really kind of weird disconnect, because I think one thing that we end up having to tell people a lot is, if somebody really wants to create an account on a site, they can create an account on a site. By which I mean, say that you have an account that absolutely goes beyond the pale and it gets suspended. Well, they can get a proxy and a disposable email address and create 30 accounts, or 50 accounts, or however many accounts, and use all of those accounts to do the same thing. The idea of suspension, or removing the content, or anything like that actually resolving the conflict has been pretty thoroughly disproven, at least in my experience. What we've found is actually helpful instead is folks just saying, hey, that was kind of a jerky thing you said, what's up, dude?

Reihan Salam: Amanda, I wonder. One way of framing this discussion is that essentially what we're doing is reproducing certain kinds of patterns and certain kinds of inequalities that exist in the offline world, in the online world as well.
And I wonder if you think that there is any legitimate role for the public sector, for regulation. Because it's natural to say, I'm a private sector organization, it's not reasonable to expect me to narrowly, tightly police all of this content that's being created, in the name of some kind of larger civic voice. Because, again, these are global organizations in many cases. There are many different local standards around these issues. But I'm curious, because you're interacting with a lot of people on the public policy side, I imagine. And going back to this idea of trying to cultivate digital citizenship: should there be some kind of role on the part of these organizations that are playing this civic role to do that?

Amanda Lenhart: That's an excellent question. I have to preface my response by saying that the Pew Research Center is strictly nonpartisan, and therefore I can't make any kind of policy recommendations.

Reihan Salam: Tell us about the state of the debate, and sort of what some might say on that front.

Amanda Lenhart: Yeah. I certainly think, as you point out, there's a lot of incredible complexity here. These are global corporations. What holds in one jurisdiction doesn't hold in another.
The privacy regulations in Europe are vastly stronger than the privacy regulations in the United States, and how do companies manage all of those different issues? I do think there are certainly a lot of calls. I think on one hand, there's the side of advocacy. There's the side of those who work with children, who see the damage that some of these experiences can have on children's psyches, and who feel very strongly and have a lot of concern that something must be done. But on the other hand, obviously, we have this need to protect the right of others to engage in free speech, as Victoria has pointed out. So I think those are the real tensions behind the debate. I do think that one of the middle grounds that often gets proposed is education, because that can be incredibly localized. It starts at the user. It doesn't require a particular technological intervention. It doesn't require regulation, which makes companies flip out. But what it does require is a lot of work on the part of parents, and a lot of work on the part of the end user. And I think one of the questions that really is before us is, how capable is the end user of taking some of these steps? And so certainly when you're dealing with adolescents, you have a whole overlay of emotional difficulties on top of not necessarily being as familiar with the technology. And I think we have expectations... I think in this room, we're all incredibly tech savvy, right. We are comfortable with the technology. Everybody's got their laptops cracked open. But you go back and think about your relatives. Think about your mom, think about your second cousin, Susie, who maybe doesn't know what's underneath the hood of her computer. And who doesn't want to know. And who has a very basic knowledge. And then expecting those folks to really be able to rise to the level of saying, hey, my son or my daughter really needs to take these six steps to protect him or herself in the online world, when the parents themselves don't even know that those steps need to be taken.
So I think there's a lot at play here that makes this a complicated space.

Reihan Salam: Before I open it up for questions, I have one last question for Del and for Victoria. I wonder, do you think that there's been an evolving understanding on the part of users? That is, do you think that people have a greater appreciation that you guys are, in a way, a platform for other people's voices, rather than an active agent in deciding what kinds of content you seek to promote?

Victoria Grand: Yeah. I think one thing is, we're not advocating a sort of no-action approach here. I think if you look at what our teams do every day, they spend a lot of time looking at, what is our turnaround time for taking down porn that gets uploaded to the site? Look at a site like Chatroulette, for example, something that could just implode if you allowed a lot of porn and a lot of nudity to go on it. And so obviously we deal a lot with the porn situation. We have a lot of user control. So, for example, for the "am I ugly" videos, the user who uploaded that video had the ability to turn comments off altogether, to moderate her comments, to block people who commented. Users also have the ability to say, hey, I appear in this video and I did not consent, I want this image of me removed from this video. And we're working right now on blurring technology that will allow people to blur images of other people in videos. So I don't think we're saying stand back and do nothing. And I do think that the social norms are growing around it. I feel like when we launch YouTube in any new country, we usually see a massive spike in flagging, and then very quickly it adjusts itself and people start to learn what is and is not acceptable to upload.
Del Harvey: And I'd say similarly, we have a lot of the same; we're also not hands-off. There are things that we do not allow, and we very actively try to protect against as many of those as possible. I would say the other thing that we're working on, and that we actually talk to a lot of the other companies in the space about pretty regularly, is trying to share more information and more of the work that we're doing on the educational side between each other, also. Because, for example, Victoria and I have often chatted about how this is a space where it's not competitive. Right, this is helping people out, keeping people safe, helping kids, educating people. I'm not going to be like, Victoria, my research is a little better than yours. Right? It doesn't benefit people. This is one of the realms where it really is, we can all work together without having to be worried about, well, the page views for that help page over there are bigger than mine, I've got to get some people viewing this now. There's just so much more happening in that space than we ever really have had before. And it's been over the past probably two years, for Twitter at least, that we really started talking with other companies and trying to make sure that there is this ongoing dialogue of, what should we be addressing as companies? What are users not aware of? What do they need to know more about? Et cetera. And you could probably speak to the same.

Victoria Grand: Yeah. I think it's hard. It's like feeding people spinach, though. People come to the platform to be entertained. They might not go to the safety section, and so a lot of the conversations have been about how you involve youth in teaching youth, and how you can create virality around the phenomenon of flagging, of privacy controls, of those kinds of things, because it's hard to get people to focus on them.

Reihan Salam: Does anyone have any questions? I don't want to eat too much into your break time, but there's a gentleman back there.
Audience: Hello. How do you deal with exposing what are appropriate memes or discussion models? Because I think that's an interesting point, that people can say what's appropriate for themselves, either in their profile or in the discussion you're hosting around some content. That would let people, kind of like we did with the internet, [unintelligible] and running real-time discussion. So I think there maybe is a model there, and maybe you could discuss that. Something that's scalable.

Reihan Salam: Amanda, would you like to field that?

Amanda Lenhart: I'm not sure that question is directed at me.

Reihan Salam: If there's anyone who'd like to field it, please do.

Victoria Grand: I think the vast majority of the content that's on these platforms is acceptable. We've tried to do things like blog about, for example, when content is coming in from Syria, or from Libya, or from citizen journalists on the ground. Oftentimes it's very graphic, and it will be a video coming from a cell phone that will be labeled with something like, one, two, three, four, and it's extremely graphic, but it doesn't have any context attached to it. So it's very hard for us to reckon with: can we leave this up without any context, for people to know why there's a brain on the ground here, right? That it's not just sort of a shock-and-disgust type video. So we've tried to do blogs around this topic, around things like artistic nudity and how to make sure that those videos can stay up and they're not taken down. Again, I think the issue that we're always tackling is this limited attention span. And I think having some breakthroughs with the teen safety community around, and I'm sure you guys think about it every day, how can we battle that? Because those blog posts don't get viewed nearly as much as the videos. How can we partner with the Konys of the world? Gosh, I think that video got more views than any other television show in the US this year, apart from the Super Bowl. How can we bring the awareness to that level? It's very hard to cut through the noise.

Audience: Hi.
I have a seven-year-old, an eight-year-old, and an eleven-year-old, and they're constantly asking me to go on YouTube. And I know that I can't put them on YouTube and leave the room and go do the dishes. I have to be right there, because there's an opportunity to click away to anything. What are the age ranges that you would recommend for use of YouTube, when you can, I hate to say this, walk away and leave them watching that video while you go and do laundry? What are the terms of use for age?

Victoria Grand: Yeah, it's 13-plus. And I know that there are a lot of parents that go onto YouTube with their kids when they're under 13. We also have a safety mode setting that you can set. And that means that videos that have been marked as 18-plus do not get presented. The other thing that happens with safety mode is we actually have an algorithm that scans all videos for flesh tones, believe it or not, and videos that are deemed to be high on flesh tones are not included in safety mode. That does capture a lot of baby videos as well, so there are some false positives. But, yeah, we say that the age for sort of surfing solo is 13.
And I know that many people say that's not realistic, and go on with their children before that to teach them some of these norms.

Audience: You got to my question a little bit, in terms of what types of models are being built to predict conflict on these platforms. So I just wanted to ask an open question: are there any other predictive algorithms or models that you all are building to find conflict on Twitter or
YouTube before it happens?

Victoria Grand: Sure. I think what's interesting is, whenever we talk about controversial content on YouTube and we say, well, the scale is so massive, people have no sympathy for that argument. I'll tell my friends this, and they'll say, but you're Google, figure it out. And the challenge is that an algorithm actually can't do most of this work. So even when you're talking about things like nudity, an algorithm isn't going to be able to know whether the nudity is being presented in a breast cancer documentary, in a surgery, or in an artistic nudity context. And so actually what we do is we use algorithms to organize videos for review. So you scan for flesh tones; if it's high on flesh tones, it means people are going to be reviewing it faster. You scan for things like the flagger's reputation. How accurate is this flagger usually? Do they always flag the Justin Bieber videos because they just want to take him down? If so, down in the queue, right. Has this video already been flagged before and been reviewed by a human? Down in the queue. How hot is it? What is the flag-to-view ratio? Very high up in the queue if it's hot. So we use algorithms to help us prioritize the review, but ultimately a human does need to look at those videos, because so many of these decisions really do turn on context.
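To make the triage Victoria describes concrete, here is a minimal sketch of a review-queue scorer in Python. Every field name, weight, and threshold below is a hypothetical illustration of the idea, not YouTube's actual system or data model.

    # Hypothetical per-video signals; the fields and weights are invented
    # for illustration only.
    flagged_videos = [
        {"id": "a", "flags": 40, "views": 2_000, "flesh_tone_score": 0.8,
         "avg_flagger_accuracy": 0.9, "previously_reviewed": False},
        {"id": "b", "flags": 3, "views": 500_000, "flesh_tone_score": 0.1,
         "avg_flagger_accuracy": 0.2, "previously_reviewed": True},
    ]

    def review_priority(video):
        """Score a flagged video; higher scores reach human reviewers sooner."""
        score = 0.0
        # "How hot is it?": a high flag-to-view ratio jumps the queue.
        score += 100.0 * video["flags"] / max(video["views"], 1)
        # Automated content signals, like a flesh-tone scan, raise priority.
        score += 10.0 * video["flesh_tone_score"]
        # Flags from historically accurate flaggers count for more.
        score += 5.0 * video["avg_flagger_accuracy"]
        # Already flagged and reviewed by a human? Push it down the queue.
        if video["previously_reviewed"]:
            score *= 0.1
        return score

    # Humans then work through the queue in descending priority order.
    for video in sorted(flagged_videos, key=review_priority, reverse=True):
        print(video["id"], round(review_priority(video), 2))

The point of the sketch is only the ordering: the algorithm ranks, and a human still makes the call on each video.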
You think about the n-word. We can't do a scrub of all of YouTube and just remove the n-word from every single comment, because it might be self-referential, or it might be in an Eddie Murphy video, comedic. There are so many different ways that context comes into play that any use of just a broad algorithm would be overbroad. And it would be, from our point of view, censorship.

Reihan Salam: Del, do you have any thoughts on the use of predictive analytics?

Del Harvey: I would say that what we actually use algorithms for more, and I actually know you do this as well, is dealing with the spam component of it, right.
So, sure, this is a lovely video of a child gurgling at a cute puppy, and the comment is something along the lines of, great video, you should check out my site, cheapcyalis.com. And you're like, you know, I don't think you watched this video. I'm pretty sure. And so we actually use a lot of stuff to identify spam and remove spam.
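As a toy illustration of that kind of mismatch signal, the sketch below flags a comment that pushes an outside link while sharing essentially no vocabulary with the video it sits under. The tokenization and threshold are invented for the example; this is not Twitter's or YouTube's actual detection logic.

    import re

    def looks_like_spam(comment, video_title):
        """Flag comments that contain a link but ignore the video's topic."""
        has_link = re.search(r"https?://|\b\w+\.(?:com|net|org)\b", comment) is not None
        comment_words = set(re.findall(r"[a-z]+", comment.lower()))
        title_words = set(re.findall(r"[a-z]+", video_title.lower()))
        # A link plus near-zero topical overlap is a strong spam signal.
        return has_link and len(comment_words & title_words) <= 1

    print(looks_like_spam(
        "great video, you should check out my site, cheapcyalis.com",
        "child gurgling at a cute puppy"))  # True: link, no shared topic words

In practice, as Del goes on to note, account-level behavior matters at least as much as any single message.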
And I would actually wager that one thing that I've seen, and that I would imagine YouTube sees as well, is that if somebody's really a bad actor on your site, like they're just doing terrible things, they're not just doing terrible things in terms of being mean to somebody. They're also creating multiple accounts. They're running sock puppet accounts. They're doing straight-up impersonation of someone. They're violating other rules. And we see a lot of bad actors, again, that get flagged because of spam violations, essentially, even when you might say, hey, this account's bad because of X, and it's not spam.

Reihan Salam: Guys, we have a break now.