Are you in a marketing filter bubble?

“Follow and interact with people in your niche” is common advice when starting up any kind of social media presence. However, what happens when you are surrounded by content that voices the same opinions and never challenges you? You might find yourself in a filter bubble and/or an echo chamber.

What are filter bubbles?

Filter bubbles, a term coined by Eli Pariser, refer to situations where internet users are presented with content and information that confirms their own ideologies, interests, and beliefs. This is made possible through the countless personalized algorithms that filter the content you see.

We know that social media platforms want users to spend as much time on their platforms as possible, because more time on the platform = more gathered data = more revenue. In essence, social media is not free; it’s a trade-off between users accessing the platforms and the platforms accessing our data and attention. How do social media platforms make us spend more time on them? By introducing algorithms that personalize and present us with the content we want to see. So, when you are offered suggestions on pages and accounts to follow on Instagram, those suggestions are based on the people you already follow or content you like.

Another similar term, echo chamber, describes more or less the same phenomenon as filter bubbles. However, while filter bubbles are imposed by algorithms, echo chambers are enhanced by the users themselves in addition to the algorithms. For example, when I was vegan for 6 years I created my own echo chamber by unfriending a person on Facebook who posted something negative about vegan food. The echo chamber is then basically a collaboration between algorithms presenting you with content you want to see and users adding and removing content based on their own beliefs and values.

Why can filter bubbles and echo chambers be a problem?

Filter bubbles and echo chambers might not seem like a bad idea from a marketing and consumer perspective. As a consumer, you only see the content that makes you feel good and unchallenged; everything you agree with. As a marketer and a brand, you can be assured that most of the people who see your brand agree with your brand values.

However, when it comes to politics and society as a whole, filter bubbles and echo chambers can have significantly dangerous results. Political polarization, a term that describes two groups with distinctly large differences in political beliefs, has been at the centre of the discussion of filter bubbles and echo chambers. In the worst-case scenario, a combined filter bubble and echo chamber can become something that looks very similar to what is discussed in the Netflix documentary The Social Dilemma. Some even suggest echo chambers could have contributed to the storming of the US Capitol on January 6th, 2021.

However, not everyone believes that echo chambers and filter bubbles have that big of an impact on our beliefs and ideologies; some argue instead that we already hold those beliefs and ideologies from our previous experiences and lives. The determining factors are then believed to be things like ‘selective exposure’, ‘selective trust’, and ‘confirmation bias’. You can read about all of these aspects in combination with fake news in another post I have written here. In short, selective exposure occurs when someone actively looks for content that matches their own beliefs. Selective trust describes when someone only trusts content that matches their own beliefs. Confirmation bias refers to combining selective exposure and trust with interpreting and remembering content in a way that matches one’s own beliefs.

However you look at it, only seeing content and information that you agree with means missing out on different perspectives, growth, and the security of knowing why you believe what you believe.

What do filter bubbles and echo chambers mean for growing skills and expertise in marketing?


Let’s change the previous sentence into something that might make more sense for marketers:

Only seeing content and information that you agree with and already know means missing out on different perspectives, growth, and the security of knowing why your preferred marketing strategies are optimal.

Whether you are working in marketing or running your own brand, finding yourself in an inspiration and information filter bubble or echo chamber is never good. In my experience, many marketers with specialized skills stick to their own niche in an attempt to become the best within it. However, everything in marketing is connected. By expanding our perspective we can learn a lot about how to improve in our own area by looking to those who specialize in a different one. This also goes for background. How can you know that your perspective is the ultimate perspective if you only follow and engage with white, heterosexual, cis-gendered, able-bodied men? People with different backgrounds sometimes have very different perspectives, and it’s important to listen to and take into account their experiences and knowledge.

In terms of niche, if you are running a shoe brand and you only keep up with shoe or clothing brands, you might find it harder to stand out than if you also saw how different travel brands market themselves.

As a marketer on Instagram it is easy to come across several accounts speaking on the same marketing topic with very similar views. In some cases you might have a different perspective, which is great, as you will then stand out. But if these are the only accounts you see on your feed, you might be stuck seeing the topic from that single point of view yourself.

This is why I always encourage marketers, and everyone else, to find inspiration from other fields with other perspectives. I make sure to follow similar pages on my main Instagram, but I have a separate account where I follow a range of different types of accounts, gathering inspiration and new ideas that in turn shape new perspectives of my own.

How do you break free from the filter bubble and echo chamber?

While echo chambers can be somewhat easier to break out of, filter bubbles are harder, as they are controlled mostly by algorithms. But there are steps you can take to make sure your social media feeds are as diverse as possible. However, diversity does not mean following accounts whose beliefs directly go against your core beliefs. It is important to know the difference between new perspectives that you might not agree with and opinions that are deal breakers, such as accounts that spread fake news and hate speech. Find your moral middle ground and challenge yourself, but don’t make your feed an unpleasant place to visit with no value.

  1. Diversify who you follow: Are all the accounts you follow somehow following each other? Try finding accounts to follow on the topics you are interested in that have no connection to those you already follow. Don’t just follow the accounts in your suggestions, but actively search for different terms and look through accounts that seem different to the ones you are currently following.
  2. Stop muting/unfollowing accounts you don’t agree with: As said, you must find your own limit, but try to really examine why you mute/unfollow someone. Is it because their opinions challenge your own? If you are secure in your opinion then someone, in most cases, voicing a different opinion shouldn’t make you feel defensive or challenged. Try holding off on the mute/unfollow button for a while to see if their perspective can give you a chance to grow, learn, or become more secure in your own opinion.
  3. Say no to personalized anything: Personalized ads and feeds are often the start of a filter bubble. Any time you have a choice of having something personalized online, deny it.
  4. Change the way you use search engines: Different search engines will give you different results. Try changing up your search engines every now and then to see if you get different results. Another thing to try out is not only relying on the first page, but clicking through to the other pages of your search results.
  5. Avoid having social media be your main channel for news: Social media doesn’t always have the room for longer articles and sources. A way of breaking out of a filter bubble and echo chamber is to gather your news from several diverse news pages.
  6. Practice critical thinking: Critical thinking should be applied to every piece of content you read. Even this post! If something sounds off in a piece of content you might be likely to research its validity. This is something you should consider doing with content you agree with as well! Now, I’m not telling you to spend hours of research on every piece of content you find, but I am advising you to stay critical and not take everything you see at face value as the be-all and end-all.

In conclusion, filter bubbles and echo chambers might be great as a consumer, and when marketing to other consumers. But the value of diverse perspectives and knowledge is in most cases much higher than a comfortable scroll through your feed.

What’s your opinion on filter bubbles and echo chambers? Do you believe they are a good thing or do you try to diversify the content you are exposed to?


Beginner’s guide to On-page Search Engine Optimisation in 2021

So, you’ve set up your website/blog/online shop, it’s looking good and you’ve even posted a few blog posts, products, and/or other information. But, looking at your analytics, traffic is only coming from places where you’ve shared your content, or not at all. Welcome to the reality of non-optimised Search Engine Optimisation. While you might have heard the terms ‘SEO’ and ‘Search Engine Optimisation’ and how search engines can be your website’s biggest source of organic traffic, the differences between On-page SEO, Off-page SEO, and Technical SEO might have passed you by. If you are experiencing a lack of organic search engine traffic on your website, continue reading this blog post to boost your knowledge of the essential parts of On-page SEO, and in turn boost your ranking on Search Engine Result Pages (SERPs) and your organic traffic from those search engines.

What is On-page SEO?

On-page SEO, also known as on-site SEO, is the practice of optimizing the content on your page (both the written word and the HTML source code) to make it as discoverable as possible.

Lahey, 2020

On-page SEO involves everything you can do on your website to optimise your SEO. As Lahey states in the quote above, from a post on SEMrush, this includes all written content, images, URLs, the website’s HTML elements, and site performance and user experience.

Why should you focus on optimising your On-page SEO?

All SEO strategies have one main goal: to increase the website’s organic traffic. This goal is achieved by applying relevant and optimised SEO strategies to increase the website’s SERP ranking, which in turn increases the chances of potential leads finding your website in their online searches. Search engines constantly use crawlers to go through websites; by optimising your On-page SEO, you make it easier for these crawlers to crawl your content and rank it according to the keywords and key-phrases you use. The higher the ranking of your website, the more likely your website is to be shown to search engine users who are looking for content similar to yours.

Optimised keywords and key-phrases

Keywords and key-phrases are the most vital part of your SEO strategy. They refer to the relevant words and phrases aimed at your target audiences, and they should be included on every relevant page of your website. However, it is important to note that these words need to be relevant and used naturally. If you attempt to mention your keywords too often, your content risks not actually being valuable to your visitors and seeming like it’s written by a robot – this is called keyword stuffing. Your main objective when writing content and using keywords and key-phrases is to provide your visitors with value on the topic of said keywords.

How to utilize keywords for On-page SEO:

  • Research your target audience’s needs: Before even setting up your business and website you should have researched who your target audience is, what their needs are, and how to fulfil those needs. By knowing the answers to these questions, you are a step closer to understanding what your keywords and key-phrases are. If your target audience is looking for plant care information, your keywords should include aspects of this topic.
  • Research how your audience uses search engines: What search terms and phrases are they using? Which types of plant or aspects of care are they most frequently searching for?
  • Research and discover keyword competition: Once you have found some keywords through the first two steps, you should research the competitiveness of those keywords, find relevant and similar keywords to the ones you have already discovered, and determine whether the keywords are too narrow, too broad, or just right for your business and target audience. How big is the search volume for these keywords and phrases? How easy is it to rank with these keywords? In this step you should use various online keyword research tools such as keywordtool.io, Wordstream, SEMrush, MOZ, Google Trends, Google Search Console, SpyFu, and Ahrefs.
  • Select relevant and optimised keywords: Now that you have gone through some keyword research, you should make a list of your chosen keywords and information about them. By keeping this list, you reduce the amount of time spent researching every time you want to post new content. However, you should re-evaluate the list frequently, as trends, topics and searches move quickly on search engines.
  • Optimise your content using the chosen keywords: Now that you have your list of keywords, it is time to incorporate them into your website in a natural, relevant and valuable way – without keyword stuffing (see the short sketch below). Continue reading to see where to place your keywords other than in blog posts.
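
To make that last step concrete, here is a minimal, hypothetical sketch of where a chosen key-phrase could naturally appear on a page. The key-phrase ‘indoor plant care’ and all of the text are purely illustrative:

<!-- Hypothetical example: the key-phrase "indoor plant care" appears once each in the title, main heading and opening paragraph -->
<title>Indoor Plant Care: A Beginner's Guide</title>
<h1>Indoor plant care for beginners</h1>
<!-- The phrase is used naturally in the copy rather than repeated in every sentence (keyword stuffing) -->
<p>Good indoor plant care starts with the right light, water and soil. This guide covers the basics for your first houseplants.</p>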

Optimising HTML elements:

This section is a short introduction to the HTML elements within a website which must be optimised for well-functioning On-page SEO.

  • Page names: Pages should have accurate names according to their contents. Little is more frustrating than trying to find a website’s ‘about’ page hidden under a name made up of random numbers and letters.
  • Meta description: This tag provides search engines and visitors with a short summary of your website and page’s content. Use relevant keywords that accurately describe the content of your website and each individual page, and make sure each individual page has its own meta description, as each page should provide different valuable content (see the markup sketch after this list). Look at the example below of the different meta descriptions for BBC’s home page and their news page.
BBC’s home page meta description as it shows up on Google.
BBC’s news page meta description as it shows up on Google.
  • Heading tags: Heading tags (H1, H2, H3…) inform search engines and visitors about the most important topics on each page. They are also a way to structure your content, which makes it easier to navigate and extract information. You should use the most relevant keywords in your headings. Looking through this blog post you can see different heading (H1) and subheading (H2, H3) tags throughout, depending on which section they belong to. Heading tags should be used when it makes sense and to add a functioning structure; using heading tags instead of paragraphs looks weird and confusing. Below is another example from BBC showing how they use headings sparingly to create informational content sections in their articles.
Example of heading use in an article from BBC.
  • Images: Webpage images can also be optimised for On-page SEO via the <img> or <picture> tags in your HTML code. Image file names and image ‘alt’ text are important features, as search engines cannot view images but rely on these elements to understand the image’s content. In addition, if your visitors for some reason cannot see the images, the ‘alt’ text will describe them. See below for the image code with alt text.
<img src="FILE NAME" alt="IMAGE DESCRIPTION">
  • Link anchor text: When you are linking, whether it’s an internal or external link, the clickable text for the link, the anchor text, should inform visitors and search engines about the page the link is referring to. Using non-descriptive anchor text such as ‘LINK’ and/or ‘MORE’ gives users no indication of what the link leads to. Furthermore, all links you include on your webpage should be, like keywords, relevant and valuable. See below for the link code with anchor text, which will look like this in text: Elise Ols’ index page
<a href="http://Eliseols.com">Elise Ols' index page</a>
<a href="URL">ANCHOR TEXT</a>

Page speed

In 2020, Google announced that page speed and user experience are taken into consideration when it comes to your site’s SERP ranking. This means that an optimised page speed is a vital part of your SEO strategy. When it comes to measuring your page speed, I recommend using Google’s PageSpeed Insights, as it gives you a score from 0 to 100 together with insights into which aspects and files on your website are slowing it down and how to improve them. PageSpeed Insights measures your website for both desktop and mobile devices, with individual scores and insights for both categories.
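
Many page speed improvements come down to loading fewer resources up front. As a rough sketch, two common, standard HTML techniques are lazy-loading images further down the page and deferring non-critical scripts; the file names below are placeholders:

<!-- Lazy-load images below the fold so they don't compete with the first render -->
<img src="large-product-photo.jpg" alt="Large product photo" loading="lazy">
<!-- Defer non-critical JavaScript so the browser can parse the HTML without waiting for the script -->
<script src="analytics.js" defer></script>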

On-page SEO for E-commerce and online businesses:

This section is particularly for online shops and online businesses, as I have seen these issues while working with and evaluating different online shops in the past.

  • Use specific product titles: When selling products or services online it is important that the product title is as specific as possible. One of my previous collaborations, for example, was selling pet apparel, but their products were titled ‘Dress’, ‘Jumper’ and ‘Shirt’. Because of this, the SEO competition tools I was using did not register the shop as a shop for pet apparel but as a shop for human apparel, due to the lack of keywords like ‘pet’, ‘dog’, and/or ‘cat’. Therefore it is important to remember to use researched keywords in your product titles as well as elsewhere on your page (see the sketch after this list). However, we’ve all seen the product titles on sites like Wish.com – ‘Creative Funny Slingshot Darts Launch Bottle Corkscrew Bar Party Gift’ is more likely to leave leads confused than to convert them into customers.
  • Include detailed and helpful product descriptions: We know that customers are likely to research products before buying. By not including product descriptions, or by having bad and non-descriptive ones, you are not only missing out on a better search engine ranking for that product page, but leads might also be turned off by the lack of information. So remember to make time to write a relevant and valuable product description for all of your products.
  • Start business blogging: If you own an online business and you have not yet started business blogging, you are missing out on a huge opportunity to continue optimising your On-page SEO strategy. Not only does business blogging give you an opportunity to provide your users with valuable content, it also gives you an opportunity to increase your rank on SERPs by creating regular On-page SEO optimised content.
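
As a sketch of the first two points, compare a generic product title with a specific, keyword-rich one; the shop, product and wording below are hypothetical:

<!-- Too generic: gives search engines no clue that this is pet apparel -->
<h1>Jumper</h1>
<!-- Specific and keyword-rich, without tipping into Wish.com-style keyword stuffing -->
<h1>Knitted Dog Jumper for Small Breeds</h1>
<p>This knitted dog jumper keeps small breeds warm on cold walks. Machine washable and available in sizes XS to M.</p>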

The takeaway:

Although SEO might seem scary and complicated at first glance, if you take some time to learn how search engines function and how SEO strategies might impact your rankings, you can definitely learn and understand SEO well enough to boost your organic traffic from search engines.

By following these tips on On-page SEO you are well on your way to improving your website’s search engine ranking without hiring an SEO specialist.

You are more than welcome to save, like, or share this post to come back to in the future! Or pin the image underneath as a simple overview of some of the basic elements of On-page SEO!

Infographic on how to optimise your on-page SEO

Fake News: A Sociotechnical Concept

The Reuters Institute Digital News Report (2019) confirms that the concern about distinguishing between ‘fake’ and ‘real’ news is still a relevant topic globally. The report shows that Brazil, South Africa, the UK, Mexico and the US are some of the countries most worried about ‘fake news’. Over the last year, the UK has had the biggest increase in worry about the credibility of news content (Reuters Institute, 2019:21).

This blog post will investigate the concept of ‘fake news’ and its implications for national and international politics. Firstly, it will present a definition of ‘fake news’. Second, it will present two perspectives on the distribution and functions of ‘fake news’: the economic and the ideological. In this section I will present research on the usage of bots on Twitter, news consumption via WhatsApp, and theories on perceptions of news content. Finally, I present possible solutions to combat ‘fake news’, followed by a conclusion.

‘Fake news’, genres and subcategories

The many terms being used to describe the current worry, in many countries, about being misled by news (Reuters Institute, 2019) might be one reason why scholars, politicians and companies are struggling to restrain it. Tandoc et al. (2018) discuss this difficulty in an analysis of the term ‘fake news’ across 34 academic articles published between 2003 and 2017. Here, ‘fake news’ is divided into different subcategories: satire, parody, fabrication, photo manipulation, advertisement, and propaganda. The subcategories are then divided again based on a model that considers the creators’ intent to deceive consumers and the levels of facticity (Tandoc et al., 2018:147–148).

Babaei et al. (2019) divide ‘fake news’ into two subcategories: misinformation and disinformation. Misinformation refers to information that is simply wrong, as in rumours, while disinformation describes information that is intentionally wrong, meant to mislead, and not verifiable. Allcott and Gentzkow (2017:214) note that some of the subcategories presented in Tandoc et al. (2018), as well as other types of news content, are not subcategories of fake news but rather closely related genres: unintentional reporting mistakes, rumours, conspiracy theories, satire, false statements, and slander.

As this post is not about the definition of ‘fake news’ but rather about its functions and implications, the definition used in this text will be based on the model of Tandoc et al. (2018) as well as the works of Allcott and Gentzkow (2017) and Babaei et al. (2019). ‘Fake news’ in this text refers to ‘intentionally and verifiably false content that deliberately presents itself as verifiably true news content’. The fake news focused on in this post will mostly be political in theme. The reason for this definition is that it takes into account that news articles containing misinformation are not fake news and will be corrected once the error is discovered; if they are not corrected, the misinformation becomes intentional and therefore becomes fake news. It also takes into account that satire, parody and advertisement often do not fully present themselves as verifiably true news content, whereas some types of propaganda, fabrication, photo manipulation and disinformation often will portray themselves as true news content.

Why is fake news a problem?

To further investigate the implications of fake news, as defined above, it is beneficial to understand why the concept has generated this amount of attention and concern. Normative theories on the media, specifically news media, link the media to freedom of speech, freedom of information and the public sphere (Gripsrud, 2007; Oltedal, 2008; Sejersted, 2008). In this view of news media, the media becomes an extension of modern democracy as its ‘watchdog’. The role of the watchdog is to watch over those in power (government, politicians, policy makers, employers and so on), make sure they do not abuse these powers, and inform citizens of their actions, providing knowledge meant to form the public into ‘informed citizens’ (Skogerbø, 2008). Within Habermas’ concept of the public sphere, the media, now very much online, has become a place for citizens to voice their opinions and have a public debate (Gripsrud, 2007; Van Aelst, 2017). News media’s task, in its roles as democracy’s watchdog and public sphere, then becomes not only to distribute these public opinions, but also to distribute high-quality and neutral information for the public to base their own opinions on (Skogerbø, 2008:45–48).

The rise of fake news can therefore become a threat to the normative expectations of what news media are supposed to offer the public. If citizens consume fake news and believe the content, their political actions might be shaped by the false information they are given. Furthermore, if citizens are concerned about the spread of fake news, it might lead to declining levels of trust in the actual high-quality and neutral information given to them.

Creation and distribution: the economics of fake news

As well as the democratic perspective on the media, the media must also be viewed from an economic perspective. Media companies are often dependent on economic profits to stay afloat in the media market. Although it might seem that a combination of the two perspectives would be most beneficial, keeping the media as democracy’s watchdog while staying afloat, within the economic perspective the role of democracy’s watchdog takes a back seat. Producing content which generates monetary growth becomes the main objective (Skogerbø, 2012). This means that news media have to produce news content that will generate clicks and engagement. Such content might not qualify as high-quality, neutral and valuable information for the public to base their own opinions on.

According to research by Allcott and Gentzkow (2017:222–223), social media is one of several explanations for the growth of fake news. Close to 42 percent of fake news websites’ visitors come from social media referrals, as opposed to 10 percent for top news sites, where almost half of the visitors come from direct browsing. The combination of fake news websites being easy to create at a low price and the possibility of monetising the content is presented as another explanation, but visitors must then know the URLs to visit these sites. It is therefore more beneficial to spread single articles on social media sites with low, or non-existent, regulation of news content in order to generate visitors. The creators of fake news can therefore be located in one country but spread fake news content targeting another country (Allcott and Gentzkow, 2017). This has created concern about different countries’ involvement in other countries’ political elections (Vox, 2018).

Twitter has been called out as a space containing fake news, particularly when it comes to its spread by ‘bots’. ‘Bots’, in this post, refers to the definition provided by Murthy et al. (2016:4955): ‘Bots, at their simplest, are social media accounts that are controlled either wholly or in part by software agents’. On Twitter, bots can be portrayed as ‘regular users’ and share content which includes fake news. The concern about bots spreading fake news on Twitter and its effect on the political opinions of the public has become a topic in the press (BBC, 2017; Vox, 2018). However, studies indicate that bots might not have such a huge impact after all. One study found that fake news on Twitter spread considerably faster and wider than true news via peer-to-peer ‘retweeting’. The study investigated the spread with and without bots and concluded that fake news still spread faster and wider when bots were ruled out (Vosoughi et al., 2018). Another look into the usage of bots on Twitter attempted to recreate bots and bot networks (Murthy et al., 2016). However, because the study’s bots were new and low in followers, and therefore lacked social capital, they did not manage to spread their content effectively.

According to Reuters Institute (2019) mobile instant messaging services (MIMs) have had a noticeable increase in terms of usage and sharing news. WhatsApp is one of the MIMs that has shown itself to be heavily used for news in countries like Brazil (53%), Malaysia (50%), South Africa (48%) and Chile (40%) (Reuters Institute, 2019:38). The topic of using WhatsApp, mostly in the form of groups, as a source for news is still relatively new, but some research has been conducted on the phenomenon (Resende et al., 2019; Valenzuela et al., 2019). In the cases of both Brazil and Chile, WhatsApp groups were used to share political news, both fake and real, during the countries’ election campaigns. In the case of Brazil, Resende et al. (2019) presented a textual analysis of the attributes of misinformation being shared on WhatsApp. Messages with misinformation included more URLs, concentrated on fewer topics (presidential candidates and government projects) and were shared more frequently by more users than messages not including misinformation. For Chile, the results based on a two-wave panel survey showed that the usage of WhatsApp for news consumption had a positive correlation with political knowledge. The study also could not confirm that the usage of WhatsApp for news increased political ‘polarisation’ (Valenzuela et al., 2019). However, the increase of political ‘polarisation’ has been presented to be another explanation for the spread of fake news (Allcott and Gentzkow, 2017).

The results generated in the cases of the spread of fake news on Twitter and WhatsApp imply that the spread of fake news is a sociotechnical phenomenon. For fake news to function it needs humans (the social) to utilize social media or MIMs (the technology) for mobility; it is therefore a sociotechnical process. This leads this post to another side of the rise of fake news, which is the ideological aspect.

When trust declines: the ideology of fake news

‘Polarisation’ entails having beliefs and attitudes that are on opposite sides of a spectrum. On the political spectrum it can be described as people being more inclined to adhere to one or the other side of the political/ideological spectrum, right-wing or left-wing (Hanitzsch et al., 2018:8). Countries like the US have seen massive polarisation when it comes to trust in the media. Although generalised trust levels were at 32 percent in both 2018 and 2019, this does not mean the underlying statistics have gone unchanged. Trust in the media on the political left has increased from 49 to 53 percent, while on the political right trust levels have gone from 17 to 9 percent (Reuters Institute, 2019:21). This leads on to the concept of the ‘hostile media phenomenon’. The ‘hostile media phenomenon’ (Vallone et al., 1985) describes how consumers believe neutral news articles sympathise with the political ideology opposite to their own. If consumers do perceive neutral news as such, it is believed that they will perform what is described as ‘selective exposure’ and ‘selective trust’. According to extensive research, ‘selective exposure’ involves consumers actively seeking out content that better correlates with their own ideological positions (Knudsen et al., 2018; Gil de Zúñiga et al., 2012; Marquart et al., 2016). These are not just American tendencies. Knudsen et al. (2018) analysed Norwegian attitudes towards historically political newspapers based on readers’ own political ideology. The results reveal that readers are more inclined to trust newspapers that align with their own ideologies.

In other words, consumers who believe neutral news articles demonstrate politically and ideologically biased content will then seek out other biased content which supports their own ideological and political beliefs. Within this selective exposure, the consumers will likely display higher levels of trust towards the content that supports their beliefs, which is described as ‘selective trust’ (Babaei et al., 2019; Knudsen et al., 2018). In the discussion of news media one can then fit the concepts of the hostile media phenomenon, selective exposure and trust, and polarisation into a self-confirming circle. One cannot assume that the public will blindly accept all information given by the media. That assumption comes close to the ‘injection model’, where consumers uncritically accept the media’s message (Gripsrud, 2007:52). Tsfati and Cappella (2003), in their investigation of attitudes towards media and patterns in media consumption, assume that consumers are rational and will seek out content they trust. Their study concluded that those sceptical towards mainstream news in actuality had a more diverse news media diet. However, this did not mean that the consumers trusted everything they were exposed to; they had higher trust in what aligned with their own beliefs. Still, the study admitted to not being able to clarify whether media sceptics seek out alternative media because they are sceptics, or whether the increase in alternative media made them more sceptical towards mainstream media (Tsfati and Cappella, 2003:521). Another direction fake news can lead to is news avoidance. In the UK, the perceived polarised coverage of Brexit has increased news avoidance by 11 percent. Among the reasons for avoiding news content, not being able to rely on it being true was the third most common, at 34 percent (Reuters Institute, 2019:25).

In short, someone who already holds politically polarised beliefs might also exhibit the hostile media phenomenon and therefore seek out, and have higher levels of trust in, biased news that aligns with their own political ideology, which can create higher levels of polarisation. This links into the sociotechnical way of viewing the functions of fake news: for fake news to function, it requires consumers who already exhibit underlying factors that make them believe the fake news is real.

Responsibility: Social media, MIMs or users?

Research shows that the internet is increasingly becoming the main way people access news. In the UK and Finland, close to half of the participants go to a news app or website for news, while similar numbers in the US and Italy use social media as their first place for news (Reuters Institute, 2019). This post has discussed the use of Twitter and WhatsApp for political news and their spread of fake news. However, statistics show that concern about fake news and lower levels of trust in the media are a global trend. This is despite the differences in how parts of the world attain their political news. Countries that use WhatsApp for political information, as well as countries that primarily consume news in other ways, are concerned about and do experience fake news (Reuters Institute, 2019). This implies that social media and MIMs themselves are not the main issue, but rather the ways social media and MIMs are being used (Murthy et al., 2016; Vosoughi et al., 2018; Resende et al., 2019; Valenzuela et al., 2019). In other words, where someone gathers their information might not matter as much as from whom they get it and their attitudes towards that information.

If one is to look at the research surrounding selective exposure and trust, polarisation and the hostile media phenomenon, these concepts seem to play a bigger role than the type of social media or MIM used to find the information (Vallone et al., 1985; Tsfati and Cappella, 2003; Hanitzsch et al., 2018; Knudsen et al., 2018; Babaei et al., 2019). However, this does not mean that where one gets the fake news from does not play any role. Research has shown that social media and MIMs can distribute both fake and real news faster and wider than traditional media (Resende et al., 2019; Valenzuela et al., 2019). Yet, for this content to be perceived as ‘real’ or ‘true’, it is shown that consumers must exhibit some underlying factors in the form of selective exposure and trust, polarisation and the hostile media phenomenon.

Combating fake news

There have been several suggestions on how to combat fake news. WhatsApp has its own section on how to recognise fake news and how to act when encountering it (WhatsApp, 2019). Facebook is working on three different ways of combating fake news: disrupting economic incentives, building new products and helping users make informed decisions (Facebook, 2017). These actions include having users and third-party fact-checking organisations report fake news on Facebook and working with projects to increase digital literacy. In an analysis of the effectiveness of strategies to combat fake news on social media sites, the results showed that labelling fake news as ‘Rated false’ was the most effective way of making consumers perceive the content as fake (Clayton et al., 2019). However, the labelling of fake news did not have a spill-over effect in the sense of making consumers more accurately perceive unlabelled fake news content. The strategy of generally warning consumers about the concept of fake news had a minor effect, but the spill-over effect in this case negatively impacted the perception of all types of news, fake and real (Clayton et al., 2019:19). Another way to regulate fake news online is blacklisting websites. NetSuccess, a Slovakian internet-marketing agency, has been blacklisting fake news websites from receiving advertisement from NetSuccess clients, and therefore cutting their economic gains. This blacklist has been used by over a hundred thousand campaigns (Juhász and Szicherle, 2017:25).

Because fake news is mainly created for the two reasons presented above, economic and ideological, it seems most effective to combat fake news in these specific areas. On the economic side, I suggest continuing to blacklist websites that create fake news content when it comes to advertisement. In terms of social media and MIMs, labelling fake news as ‘Rated false’ seems so far to be the most effective way to stop users from clicking on the articles and generating visitor monetisation. In this case it is also important to rely on third-party fact-checking organisations and not only on users to report the fake news, as research has shown that users’ own biases can affect their perception of what is and is not fake news (Vallone et al., 1985; Tsfati and Cappella, 2003; Hanitzsch et al., 2018; Knudsen et al., 2018; Babaei et al., 2019; Clayton et al., 2019).

From the ideological perspective, I have argued that it is the consumers who make the final decision whether to believe fake news or not. While labelling fake news on social media helped consumers judge specific articles, it did not make them evaluate unlabelled news more accurately. It is therefore important to focus on education that can enable news consumers to recognise fake news without labels (Clayton et al., 2019:19). The media, companies and governments should continue to offer advice on how to accurately judge news content. Education should look to add this concept to the curriculum (Wineburg et al., 2016) as soon as possible, so that young people, who now grow up with online content, will be better prepared for fake news when they become old enough to contribute to the public and political environments.

Conclusion

The consequences of fake news online can be argued to be somewhat moderate yet complex. If one perceives the media within its normative role, citizens are dependent on the political information given to them to make political decisions, whether it is through voting, activism or voicing their political opinions (Gripsrud, 2007; Skogerbø, 2008). If we consider that the content of the fake news must match the consumers’ own political ideologies, then fake news might not be a problem when it does not align. However, when the content of fake news does align with the consumers’ ideologies, in addition to how social media and MIMs can distribute both fake and real news faster and wider than traditional media (Resende et al., 2019; Valenzuela et al., 2019), it is then that concern should arise. Fake news can have the ability to increase polarisation within a population when it aligns with the ideologies of one group more than the other.

As possible solutions I suggest combating fake news by blocking its economic gains, including blacklisting and stopping visitor revenue. When it comes to the ideological aspects of fake news, I urge the media, companies and governments to continue to offer advice on how to accurately judge news content, in addition to adding this to education curriculums.

Sources

Allcott, Hunt, and Matthew Gentzkow. 2017. ‘Social Media and Fake News in The 2016 Election.’ Journal of Economic Perspectives vol. 31(2) pp. 211–236. [Online] [Accessed 29th November 2019] DOI:10.1257/jep.31.2.211.

Bakshy, E., Messing, S. and Adamic, L. (2015). ‘Political science. Exposure to ideologically diverse news and opinion on Facebook.’ Science vol. 348(6239) pp. 1130—1132. [Online] [Accessed 18th December  2019]  DOI:10.1145/3287560.3287581

Babaei, M., Chakraborty, A., Kulshrestha, J., Redmiles, E., Cha, M. and Gummadi, K. (2019) ‘Analyzing Biases in Perception of Truth in News Stories and Their Implications for Fact Checking.’ FTA. [Online] [Accessed 29th November 2019] DOI:10.1145/3287560.3287581

BBC (2017) Massive network of fake accounts found on Twitter. [Online] [Accessed 18th December 2019]  https://www.bbc.com/news/technology-38724082

Clayton, K., Blair, S., Busam, J. A., Forstner, S., Glance, J., Green, G., … Sandhu, M. (2019) ‘Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media.’ Political Behavior, 1–23. [Online] [Accessed 18th December 2019]  DOI:10.1007/s11109-019-09533-0

Facebook (2017) Working to stop misinformation and fake news. [Online] [Accessed 18th December 2019] https://www.facebook.com/facebookmedia/blog/working-to-stop-misinformation-and-false-news

Gil De Zúñiga, H., Correa, T. and Valenzuela, S. (2012) ‘Selective Exposure to Cable News and Immigration in the U.S.: The Relationship Between FOX News, CNN, and Attitudes Toward Mexican Immigrants.’ Journal of Broadcasting & Electronic Media vol. 56 (4) pp. 597-615. [Online] [Accessed 29th November 2019] DOI: 10.1080/08838151.2012.732138

Gripsrud, J. (2007) ‘Mediekultur, mediesamfunn’ (Media culture, media society). 3rd ed., Oslo: Universitetsforlaget AS

Hanitzsch, T., van Dalen, A. and Steindl, N. (2017). ‘Caught in the nexus: A comparative and longitudinal analysis of public trust in the press.’ International Journal of Press/Politics. [Online] [Accessed 29th November 2019] DOI: https://doi.org/10.1177/1940161217740695  

Juhász, A. and Szicherle, P. (2017) ‘The political effects of migration – related fake news, disinformation and conspiracy theories in Europe.’ Political Capital [Online] [Accessed 29th November 2019] https://politicalcapital.hu/news.php?article_read=1&article_id=1505

Knudsen, E., Iversen, M. H. and Vatnøy, E. (2018) ‘Mistillit til den andre siden.’ (Distrust in the other side.)  Norsk Medietidsskrift, 25(2) pp. 1-20. [Online] [Accessed on 26th November 2019] DOI: 10.18261/ISSN.0805-9535-2018-02-04

 Marquart, F., Matthes, J. and Rapp, E. (2016) Selective Exposure in the Context of Political Advertising: A Behavioral Approach Using Eye-Tracking Methodology. International Journal of Communication vol. 10 pp. 2576–2595. [Online] [Accessed 29th November 2019] DOI:1932–8036/20160005

Murthy, D., Powell, A. B., Tinati, R., Anstead, N., Carr, L., Halford, S. J. and Weal, M. (2016) ‘Automation, Algorithms, and Politics| Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital.’ International Journal of Communication vol. 10. [Online] [Accessed 29th November 2019]

Oltedal, A. (2008) ‘Etikk og journalistikk.’ (Ethics and journalism.) In Von Der Lippe, B. (ed.) ‘Medier, politikk og samfunn’ (Media, politics and society). 5th ed., Oslo: Cappelen Akademiske Forlag, pp. 35—59

Reuters Institute (2019) Reuters Institute Digital News Report 2019. [Online] [Accessed 19th December 2019] https://digitalnewsreport.org/

Resende, G., Melo, P., Reis, C. S., Vasconcelos, M., Almeida, J. M. and Benevenuto, F. (2019) ‘Analyzing Textual (Mis)Information Shared in WhatsApp Groups’. WebSci ’19 Proceedings of the 10th ACM Conference on Web Science. [Online] [Accessed 19th December 2019] DOI:10.1145/3292522.3326029

Sejersted, F. (2008) ‘Ytringsfriheten og informasjonsfriheten’ (Freedom of speech and freedom of information.) In Von Der Lippe, B. (ed.) ‘Medier, politikk og samfunn’ (Media, politics and society). 5th ed., Oslo: Cappelen Akademiske Forlag, pp. 20—34

Skogerbø, E. (2008) ‘Normativ teori, medier og demokrati’ (Normative theory, media and democracy). In Eide, M. (ed.) ‘Medievitenskap: Medier – institusjoner og historie’ (Media science: Media – Institutions and history). Bergen: Fagbokforlaget, pp. 39—52

Skogerbø, E. (2012) ‘Medievitenskapelig analyse – regulering av mediemarkedene’ (Media scientific analysis – regulating the media market). [Online] [Accessed 19th December 2019] https://www.regjeringen.no/globalassets/upload/KUD/Styrer_raad_utvalg/Medieavdelingen/Medievitenskapelig_analyse-regulering_av_mediemarkedene2012.pdf

Tandoc, E. C., Lim, Z. W. and Ling, R. (2018) ‘Defining “Fake News”.’ Digital Journalism, 6(2) pp. 137-153. [Online] [Accessed 29th November 2019] DOI: 10.1080/21670811.2017.1360143

Tsfati, Y. and Cappella, J. N. (2003) ‘Do people watch what they do not trust? Exploring the association between news media skepticism and exposure.’ Communication Research vol. 30(5) pp. 504−529. [Online] [Accessed on 26th November 2019] DOI: https://doi.org/10.1177/0093650203253371

Valenzuela, S., Bachmann, I. and Bargsted, M. (2019) ‘The Personal Is the Political? What Do WhatsApp Users Share and How It Matters for News Knowledge, Polarization and Participation in Chile.’ Digital Journalism. [Online] [Accessed 19th December 2019] DOI: 10.1080/21670811.2019.1693904  

Vallone, R. P., Ross, L. and Lepper, M. R. (1985) ‘The hostile media phenomenon: biased perception and perceptions of media bias in coverage of the Beirut massacre.’ Journal of personality and social psychology vol. 49(3) pp. 577—585. [Online] [Accessed 29th November 2019] DOI: https://doi.org/10.1037//0022-3514.49.3.577

 Van Aelst, P., Strömbäck, J., Aalberg, T., Esser, F., de Vreese, C., Matthes, J., … and Papathanassopoulos, S. (2017) ‘Political communication in a high-choice media environment: a challenge for democracy?’ Annals of the International Communication Association vol. 41(1) pp. 3—27. [Online] [Accessed on 26th November 2019] DOI: 10.1080/23808985.2017.1288551

Vox (2018) Twitter released 9 million tweets from one Russian troll farm. Here’s what we learned. [Online] [Accessed on 19th December 2019] https://www.vox.com/2018/10/19/17990946/twitter-russian-trolls-bots-election-tampering

Vosoughi, S., Roy, D. and Aral, S. (2018) ‘The spread of true and false news online.’ Science vol. 359(6380) pp. 1146—1151. [Online] [Accessed on 30th November 2019] DOI:  https://doi.org/10.1126/science.aap9559

WhatsApp (2019) Tips to help prevent the spread of rumors and fake news. [Online] [Accessed on 19th December 2019] https://faq.whatsapp.com/en/android/26000216/?category=5245250

Wineburg, S., McGrew, S., Breakstone, J. and Ortega, T. (2016) ‘Evaluating Information: The Cornerstone of Civic Online Reasoning.’ Stanford Digital Repository. [Online] [Accessed 19th December 2019] http://purl.stanford.edu/fv751yt5934