It’s not a new phenomenon: you’re on Twitter or Instagram and someone you’ve never heard of adds you or sends you a message. Or a Facebook or LinkedIn request pops up from someone you think you know from your company or gym, but you can’t be sure. More often than not, you’ll soon get word that you shouldn’t accept any requests from this ‘fake account’.
Platforms have been taking action: 500 accounts were removed from Facebook and Instagram for inauthentic behaviour and suspected Russian links, and Instagram recently also announced its aim to overcome the issue of fake influencers by stamping out bots and fake followers. Whether the motive is using fake news to influence political views and policy, hacking to access financial accounts for money laundering, or cyberbullying or spreading falsehoods to damage the reputation of the victim, there are myriad reasons that fake profiles plague the web as we know it.
Fake social media accounts are a critical problem for society as a whole. Businesses breached by hackers face significant damage to their reputation and monetary assets. Some organisations could even face fines for failing to meet Know-Your-Customer (KYC) compliance requirements. For consumers who are targeted, the consequences can be life-changing. Think about all the other apps and sharing economy platforms that use social media account details for authentication or sign-in purposes: those are at risk, too. This means it’s down to every platform to defend itself against fake identities to protect its users and business partners alike.
For the wider economy, there’s a whole host of damage that fake identities and accounts could do to users on e-commerce platforms, peer-to-peer lending spaces, and even primary ticketing sites and ticket reselling marketplaces. It could lead not only to hacks and information leaks, and higher prices for limited goods bought up and resold by bots, but also to financial fraud and money laundering. Consumers and small business owners become naturally wary, and that wariness leads to an overall business slowdown.
To combat the lack of trust, as well as physical and cybersecurity concerns, businesses are stepping up their online security and new user onboarding to introduce digital identity verification procedures.
Benefits abound. On marketplaces such as Airbnb or eBay, what once took a user months of building up a profile and instilling trust through multiple peer-to-peer reviews can now be done overnight. Once a new user’s profile is digital ID-verified, other users are instantly more comfortable trusting a ‘stranger’ or a platform newcomer before they amass a dozen reviews. We could go further and imagine a future where we’re able to match digital identities to real people almost instantly. In time, this could transform live entertainment, travel and hospitality, making them safer and more convenient for users.
Imagine going to see a high-profile football game or concert with your family and kids without being concerned about the potential presence of hooligans or disruptors in the crowd. We’re already seeing instances where pop stars, like Taylor Swift, turn to facial recognition technology to scan the crowd for stalkers. The possibilities for improving our experience and public safety in the future are endless. But it’s going to take some time for technology to catch up, as we continue fixing the bare identity verification ‘basics’.
Today, our battleground is still defending against fake users on digital platforms, who inhibit global e-commerce and impede the sharing economy. Alongside fake identities, ‘dormant’ or historically unverified identities represent a potentially even bigger challenge to the economy. As new regulations kick in, legacy customers in traditional, long-standing industries such as banking must have their identities re-verified before they are able to transact online, even if those identities are real.
One such example is the EU Payment Services Directive (PSD2), which took effect in January 2018. To combat payment fraud and the associated losses, a key element of PSD2 is the introduction of additional security authentication for online transactions over €30, coming into force on 14 September 2019. This means that many banks will have to conduct rapid legacy customer re-authentication to ensure they have all the required information, including a digitally verified ID, on file ahead of the deadline.
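The €30 rule above can be sketched as a simple decision function. This is an illustrative simplification, not the full regulation: the threshold constant, the function name, and the `exemption` flag (real PSD2 exemptions also involve cumulative limits and risk analysis) are assumptions for the example.

```python
SCA_THRESHOLD_EUR = 30.0  # illustrative constant based on the PSD2 figure cited above

def requires_sca(amount_eur: float, exemption: bool = False) -> bool:
    """Sketch: online payments above the threshold generally trigger
    Strong Customer Authentication unless an exemption applies.
    The exemption flag is a stand-in for PSD2's more detailed
    low-value and risk-based exemption rules."""
    return amount_eur > SCA_THRESHOLD_EUR and not exemption

print(requires_sca(25.0))                  # False: under the threshold
print(requires_sca(45.0))                  # True: additional authentication needed
print(requires_sca(45.0, exemption=True))  # False: exemption applies
```

In practice banks layer cumulative-amount and fraud-rate conditions on top of the simple per-transaction check shown here.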
Whether businesses are tasked with weeding out fake identities on social media or simply verifying their legacy customer base, they can work with identity verification companies to implement a first line of defence and ensure regulatory compliance. With identity verification technology, users take a photo of an ID, such as a driver’s licence or other government-issued document, with a smart device, and then take a selfie. AI-enabled software inspects the document to determine whether it’s authentic and unaltered. The technology then performs a biometric face comparison between the photo on the document and the selfie, tying the person to the document and thus proving their real-world identity.
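The two-step decision described above (document authenticity check, then biometric face match) can be sketched as follows. The scores here are stubs: in a real system, the document check and the face-match score would come from AI models, and the 0.85 threshold is an assumed value that vendors tune for their own false-accept/false-reject trade-off.

```python
from dataclasses import dataclass

FACE_MATCH_THRESHOLD = 0.85  # assumed similarity cutoff; real systems tune this

@dataclass
class VerificationResult:
    """Outputs a document + selfie pipeline would produce (stubbed here)."""
    document_authentic: bool   # did the ID pass authenticity/tamper checks?
    face_match_score: float    # 0.0-1.0 similarity between ID photo and selfie

def verify_identity(result: VerificationResult) -> bool:
    """Both checks must pass: the document is genuine AND the selfie
    matches the photo on the document."""
    return (result.document_authentic
            and result.face_match_score >= FACE_MATCH_THRESHOLD)

print(verify_identity(VerificationResult(True, 0.92)))   # True: verified
print(verify_identity(VerificationResult(False, 0.99)))  # False: tampered document
print(verify_identity(VerificationResult(True, 0.40)))   # False: selfie mismatch
```

The key design point is that the checks are conjunctive: a strong face match cannot rescue a forged document, and a genuine document cannot rescue a selfie that doesn’t match.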
Every online platform should include identity verification, in particular those serving the sharing economy and e-commerce. Businesses can only properly protect their communities from users who pose a real threat by verifying the identity of every customer. Rather than letting fake users erode consumer trust, and with it business success, it’s time to do what’s best for consumers and take back control.