RSS - Isaca.org

ISACA Now: Posts


RSS feed for the Posts list.

In the Age of Cloud, Physical Security Still Matters


Sourya Biswas

As a security consultant, I’ve had the opportunity to assess the security postures of clients of all shapes and sizes. These enterprises have ranged in size from a five-person startup, where all security (and information technology) was handled by a single individual, to Fortune 500 companies with standalone security departments staffed by several people handling application security, vendor security, physical security, etc. This post is based primarily on my experiences with smaller clients.

Cloud computing has definitely revolutionized the way companies do business. Not only does it allow companies to focus on core competencies by outsourcing a major part of the underlying IT infrastructure (and associated problems), it also allows for the conversion of heavy capital expenditure into scalable operational expenses that can be turned up or down on demand. The latter is especially helpful for smaller companies that can now access technologies that before had only been available to enterprises with million-dollar IT budgets.

Information security is one area where this transformation has been really impactful. With the likes of Amazon, Google and Microsoft continually updating their cloud environments and making them more secure, a lot of those security responsibilities can be handed over to the cloud providers. And this includes physical security as well, with enterprises no longer having to secure their expensive data centers.

However, this doesn’t mean that the need for physical security in the operating environment disappears. I once had a client CEO say to me, and I’m quoting him word for word – “Everything is in the cloud; why do I need physical security?” I responded, “Let’s consider a hypothetical scenario: you’re logged into your AWS admin account on your laptop and step away for a cup of coffee; I walk in and walk away with your laptop. Will that be a security issue for you?” Considering that this client had multiple entry points to its office with no receptionist, security guard or badged entry, I consider this scenario realistic instead of just hypothetical.

I’ve visited client locations where I signed in on a tablet with my name and the name of the person I was there to meet; that person was notified, and I was subsequently escorted in. Note that at no point in this process was I required to verify who I was. Considering the IAAA (Identification, Authentication, Authorization, Auditing) model, I provided an Identity, but it was never Authenticated. In fact, anyone who signed in with my name would have gained access to the facility, since the client contact was expecting me – or rather, someone with my name – to show up around that time.
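The Identification-versus-Authentication gap in that sign-in process can be made concrete in a few lines of code. This is a hypothetical sketch, not a real visitor-management system; the `id_verified` flag stands in for any actual proof of identity, such as a receptionist checking a photo ID.

```python
from dataclasses import dataclass

# Hypothetical sketch of the sign-in gap described above: the tablet performs
# Identification (a claimed name) but never Authentication (proof of the claim).

@dataclass
class Visitor:
    claimed_name: str          # Identification: what the visitor asserts
    id_verified: bool = False  # Authentication: e.g., photo ID checked at the desk

def grant_entry(visitor: Visitor, expected_guests: set) -> bool:
    # A name the host expects is necessary...
    if visitor.claimed_name not in expected_guests:
        return False
    # ...but without this check, anyone claiming that name walks in.
    return visitor.id_verified

expected = {"Sourya Biswas"}
impostor = Visitor(claimed_name="Sourya Biswas")                      # never authenticated
consultant = Visitor(claimed_name="Sourya Biswas", id_verified=True)  # ID actually checked
```

With the authentication check in place, the impostor is turned away and the real consultant is admitted; the tablet-only process described above is equivalent to returning `True` as soon as the claimed name matches.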

Let’s look at one more example. One of my clients, dealing with sensitive chemicals, had doors alarmed and CCTV-monitored. However, they left their windows unguarded, with the result that a drug addict broke in and stole several thousand dollars’ worth of material.

Smaller companies on smaller budgets obviously want to limit their spend on security. And with their production environments in the cloud, physical security of their office environments is the last thing on their minds. However, most of them have valuable physical assets, even if they don’t realize it, that could be secured by spending minimally. Here are a few recommendations:

  • Ensure you have only a single point of entry during normal operations. Having an alarmed emergency exit is, however, highly recommended.
  • Ensure that the above point of entry is covered by a camera. If live monitoring of the feed is too expensive, ensure that the time-stamped footage is stored offsite and retained for at least three months so that it can be reviewed in case of an incident.
  • Install glass breakage alarms on windows. Put in motion sensors.
  • In addition to alarms for forced entry, an alarm should sound for a door held open for more than 30 seconds. Train employees to prevent tailgating.
  • Require employees and contractors to wear identification badges visibly.
  • Verify identity of all guests and vendors before granting entry. Print out different-colored badges and encourage employees to speak up if anyone without a badge is on the premises.
  • Establish and enforce a clear screen, clean desk and clear whiteboard policy.
  • Put shredding bins adjacent to printers. Shred contents and any unattended papers at close of business.
  • Mandate the use of laptop locks.
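The door-held-open rule in the list above is simple enough to sketch as decision logic. This is only an illustration of the rule: the sensor polling, siren wiring, and the 30-second threshold are the assumptions, and a real deployment would integrate with actual alarm hardware.

```python
from typing import Optional

# Minimal sketch of the door-held-open recommendation: sound an alarm when a
# door stays open past a threshold (30 seconds, per the recommendation above).
# Only the decision logic is shown; hardware integration is omitted.

HOLD_THRESHOLD_S = 30.0

def check_door(opened_at: Optional[float], now: float,
               threshold: float = HOLD_THRESHOLD_S) -> bool:
    """Return True when the alarm should sound: door open longer than threshold."""
    if opened_at is None:        # door is closed; nothing to alarm on
        return False
    return (now - opened_at) > threshold
```

Polled with timestamps, a door opened at t=100 and checked at t=125 stays quiet, while a check at t=131 trips the alarm; a closed door (`opened_at=None`) never alarms.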

Please note that the above recommendations are not expensive to implement. While some are process-based requiring employee training, most require minimal investment in off-the-shelf equipment. Of course, there are varying degrees of implementation – for example, contracting with a vendor to monitor and act on alarms will cost more than just sounding the alarm.

In summary, while physical security requirements have definitely been reduced by moving to the cloud, it would be foolhardy to believe they have disappeared. This relative neglect of physical security by certain companies, along with related topics, is the subject of my upcoming session at ISACA Geek Week in Atlanta.

What other physical security measures do you think companies often ignore but would be easy to implement? Respond in the comments below.

Category: Cloud Computing
Published: 8/19/2019 1:19 PM


The Film Industry and IT Security


Barbara Wabwire

For those in the ISACA community who are fans of popular culture, you might have noticed in recent years that, in many cases, film and TV stars are beginning to look more like you and me, and less like the muscle men of our youth.

Movie and TV producers have long been interested in technology – from the days of lone action heroes like the one-man army of John Rambo in “First Blood” and Arnold Schwarzenegger as a cyborg assassin in “The Terminator,” the film industry has been at it. But as the work performed by IT security practitioners has become more central not only to all enterprises but to society as a whole, it has been interesting to see how that realization is filtering onto the big (and small) screens.

Now having more fully embraced technology-savvy heroes, the film industry portrays IT security in action-packed, fast-paced, intense scenes where IT systems are breached with a few clicks, in a matter of seconds. The nerdy programmer superheroes are largely depicted as introverted loners, and family members of IT security characters are prone to kidnappings, hostage situations and other forms of trauma associated with the job.

In recent times, the internet, smartphones and mobile computing technology have taken center stage in movies, mirroring their rising prominence in our daily lives. Plots no longer build toward traditional showdowns in physical locations; they are more likely to traverse multiple virtual locations, through drones and closed-circuit television.

In the hit TV series “24,” Joel Surnow and Robert Cochran created the character of the indomitable Jack Bauer, who relies heavily on intel from the IT security team. The team, normally just one or two very intelligent people, supports all counterterrorism operations from a multi-screen command operations center. The protagonists target each other’s operations centers as part of the main strategic battle plan. Backup plans and fallback positions become the lifeline of these stories; you have to bring them all down to win – this is the new fictional reality.

The Hatton Garden TV drama tells the real-life story of how the underground Hatton Garden Safe Deposit Company was burgled by four elderly, experienced thieves. As viewers, we worry whether the aging thieves will survive hunger, severe incontinence and, worse still, heart attacks. And we must wonder what really happens to the IT security personnel in such a plot during long weekends, especially over Easter.

The tension and level of precision required of IT security professionals will vary from one sector to another. IT security personnel in a bank may stress over financial loss schemes orchestrated by internal and external players, while in a law firm, the concerns might center on a data leak that could compromise the privacy and confidentiality of the clients and violate lawyer-client confidentiality, paving the way to lawsuits, reputational risk and unfathomable damage. It amounts to a matter of trust, built painstakingly over a long period of time, that can be brought down in a very short one. And the business world is not so forgiving (see the Panama Papers exposé).

The good news is that the daily routine of a “normal” IT security practitioner is relatively mundane by comparison and would not sell at the box office. Incidentally, how many IT security professionals would pay a premium ticket price to watch “us” do our job normally? The excitement and glamour injected into the roles by the script writers may be necessary to keep us glued to our seats, but taking some creative license has long been a hallmark of film and TV producers. That should not obscure the bigger picture here – the work that IT security professionals do for our enterprises can have heroic impact, as today’s consumers of cinema and television can increasingly attest.

Category: Security
Published: 8/16/2019 2:59 PM


The Key Point Everyone is Missing About FaceApp


Rebecca Herold

Much has been written in recent weeks about the widely publicized privacy concerns with FaceApp, the app that uses artificial intelligence (AI) and augmented reality algorithms to take the images FaceApp users upload and allow the users to change them in a wide variety of ways. Just a few of the very real risks and concerns, which exist in most other apps beyond FaceApp, include:

  1. The nation-state connection (in this case, Russia)
  2. Unabashed, unlimited third-party sharing of your personal data
  3. Terms of use give unrestricted license for FaceApp to use your photos
  4. Your data will exist forever … in possibly many different places
  5. Data from the apps are being used for surveillance
  6. Data from the apps are used for profiling
  7. Apps are being used in ways that bully and/or inflict mental anguish
  8. Using the images for authentication to your accounts
  9. Your image can easily be used in deep fake videos
10. Look-alike apps are spreading malware

I could go on, but this should provide you with a good idea of the range of risks involved. Here is an important key point not within this list that has not been highlighted in the three or four dozen articles I’ve read on the topic: the FaceApp uproar highlights a long-time problem that is getting even worse in the way that privacy policies are written.

Evolution of Privacy Policies to Anti-Privacy Policies
I’ve been delivering privacy management classes since 2002. One of the topics I’ve emphasized is the importance of organizations actually doing what they say they will do in their website privacy policies, and not using misleading and vague language to limit privacy protections and increase sharing with third parties. (Privacy policies are also often referenced as privacy notices; for the purposes of this article, consider them to be one and the same.) Organizations should not use privacy policies as a way to remove privacy protections from individuals. The US Federal Trade Commission (FTC) actually published a substantive report detailing these problems in May 2000, entitled “Privacy Online: Fair Information Practices in the Electronic Marketplace: A Report to Congress.” The advice within this report is as valid today as it was back then; in many ways even more so.

A key point made within that FTC report emphasized the need to provide clarity around the collection, use and disclosure of personal data, and the choices related to it. In particular, the FTC’s research into website privacy policies highlighted three significant problem areas:

1) using of contradictory language;
2) offering unclear descriptions of how consumers can exercise choice; and
3) including statements indicating the possibility of changes to the policy at any time.

From 2000 to around 2010, I saw many websites that actually tried to address these issues. This was a fairly hot topic at information security and privacy conferences then, during which time I delivered keynotes and classes specific to addressing privacy within privacy policies, and then implementing the supporting controls within the organization to meet compliance with those privacy policies.

What happened around 2011 and after? A perfect anti-privacy storm: increased use of search engine optimization (SEO) in ways that included deceptive statements on websites and in their privacy policies, and a huge jump in the global population’s use of social media sites and blogging. This led to thousands of headlines over the past decade demonstrating increasingly privacy-unfriendly practices, soon followed by apps that integrated with virtually every type of device, server, social media site and cloud service. To succeed in these areas – to rank highest in searches, gather the most personal data to monetize, get the most likes, and get the most online amplification through partnering and sharing data with as many other organizations as possible – marketing practices incorporated creative (actually deceptive) modifications of privacy policies. This is, in large part, why so many currently posted privacy policies tip toward being mostly anti-privacy in the way they are written, often allowing as much data as possible to be shared with as many third parties as possible.

FaceApp’s Privacy Policy Problems
There are many vague and problematic areas within the FaceApp posted privacy policy; take a moment to read it. See what I mean? Let’s consider the “Parties with whom we may share your information” section in particular.

  • FaceApp can share unlimited types and amounts of your information (of all types) with “businesses that are legally part of the same group of companies that FaceApp is part of, or that become part of that group (“Affiliates”).” What businesses do those include? It doesn’t say in the FaceApp privacy policy.
  • So, digging deeper: according to the FaceApp Terms page, FaceApp’s “Designated Agent” is “Wireless Lab Ltd.” with an address in Saint Petersburg, Russia. I did not find a privacy policy or terms of use on the Wireless Lab Ltd. page. It is interesting to see their email contact listed as info@faceapp.com. So, the businesses that are “legally part of the same group of companies that FaceApp is part of” remain a mystery, based on what the websites communicate.
  • Moving on to others outside of their “group of companies,” FaceApp indicates that they “also may share your information as well as information from tools like cookies, log files, and device identifiers and location data, with third-party organizations that help us provide the Service to you (“Service Providers”). Our Service Providers will be given access to your information as is reasonably necessary to provide the Service under reasonable confidentiality terms.” So, do you now know who FaceApp is sharing data with? No. Do you know the specific data that is being shared to unknown others? No.
  • Moving on … they also state: “We may remove parts of data that can identify you and share anonymized data with other parties. We may also combine your information with other information in a way that it is no longer associated with you and share that aggregated information.” Does this give you assurance? No. Why? Because the way this is written they may be sending your personal data and so-called “anonymized data” to other parties, and that information may also be combined with other information that actually could re-identify you.

This section of the FaceApp privacy policy could be reworded to have basically the same meaning as: FaceApp may share any of your information with anyone else to use however they wish. Does this sound like a “privacy” policy to you? This type of non-privacy pledge is far too common on websites.
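The re-identification risk in that last bullet can be shown concretely: a record with the name stripped, when linked against an outside dataset on the quasi-identifiers both share, can resolve back to a single named person. Below is a minimal sketch; all names, ZIP codes and fields are invented for illustration.

```python
# "Anonymized" records shared by a hypothetical app: name removed, but
# quasi-identifiers (ZIP code, birth year) kept.
anonymized = [
    {"zip": "50301", "birth_year": 1985, "photo_filter": "old-age"},
    {"zip": "50301", "birth_year": 1992, "photo_filter": "smile"},
]

# An outside dataset the recipient might already hold, e.g. a public
# voter roll or a data broker's file (invented data).
outside_data = [
    {"name": "Alice Doe", "zip": "50301", "birth_year": 1985},
    {"name": "Bob Roe",   "zip": "90210", "birth_year": 1992},
]

def reidentify(record, reference):
    # Link on the quasi-identifiers shared by both datasets.
    return [p["name"] for p in reference
            if p["zip"] == record["zip"] and p["birth_year"] == record["birth_year"]]
```

Here the first “anonymous” record links to exactly one named person, which is precisely the combination-with-other-information scenario the policy language permits.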

It is also worth noting that there was:

  • Just a single sentence (“We use commercially reasonable safeguards to help keep the information collected through the Service secure and take reasonable steps (such as requesting a unique password) to verify your identity before granting you access to your account.”) describing security, and a disclaimer of any responsibility for even securing your information and preventing others from getting access to your data.
  • No apparent information about how you can access and view all your data that they’ve collected or derived from what you provided to them.

Privacy Policy Problems Not Unique to FaceApp
If you reviewed your own organization’s privacy policy, would you identify similar problems? If you find that everything does look good from a privacy standpoint, is your organization fulfilling all the promises made in your posted privacy policy? In my experience doing privacy policy PIAs over the past couple of decades, roughly 90-95 percent of organizations are NOT in compliance with their own posted privacy policy. Every organization needs to realize that they are legally obligated to fulfill the promises they make within their own posted privacy policies, in addition to all their applicable laws and regulations.

It is a good practice for every IT audit, information security and privacy officer to put an audit of their posted privacy policy on their annual plan. If you don’t, you may be added to the growing list of organizations that have been slapped with increasingly large FTC fines for not fulfilling privacy policy promises.

Category: Privacy
Published: 8/14/2019 2:59 PM


Ethical Considerations of Artificial Intelligence


Lisa Villanueva

Have you ever stopped to consider the ethical ramifications of the technology we rely on daily in our businesses and personal lives? The ethics of emerging technology, such as artificial intelligence (AI), was one of many compelling audit and technology topics addressed this week at the 2019 GRC conference.

In tackling this topic in a session titled “Angels or Demons, The Ethical Considerations of Artificial Intelligence,” presenter Stephen Watson, director of tech risk assurance at AuditOne UK, first used examples to define the different forms of AI. For example, in the early stages of AI, it was thought that a computer could not beat a human at a game of chess or Go. Many were fascinated to find that a computer could indeed be programmed to achieve this goal. This is an example of Narrow or Weak AI, where the computer can outperform humans at a specific task.

However, the major AI ethics problem and ensuing discussion largely focused on Artificial General Intelligence (AGI), the intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. Some researchers refer to AGI as “strong AI” or “full AI,” and others reserve “strong AI” for machines capable of experiencing consciousness. The goal of AGI is to mimic the human ability to reason, which could, over time, result in the deployment of technology or robots that achieve a certain level of human consciousness. Questions were posed to the audience such as:

  • Should we make AI that looks and behaves like us and has rudimentary consciousness? Around half (49 percent) of the session attendees polled said no – not because they felt it was immoral or “playing God,” but because it would give a false sense that machines are living creatures.
  • Can morality be programmed into AI since it is not objective, timeless or universal and can vary between cultures?
  • Would you want AI-enabled technologies to make life-and-death decisions? Take the example of the self-driving car: should the car be programmed to save the driver or the pedestrian in the unfortunate event of a collision?

In what scenarios would you want the AGI-enabled device to make the decision? Assurance professionals and others have been focused on gaining a better understanding of the mechanics of AI, and ISACA provides guidance on the role IT auditors can play in the governance and control of AI. However, after this thought-provoking GRC session, it became apparent that questions such as the following should also be seriously discussed, to ensure ethics and morals are not forgotten in the effort to harness this technology:

  • What rules should govern the programmer, and to what extent should the programmer’s experience and moral compass play into how the AGI responds to situations and people?
  • What biases are inherent in the data gathered and upon which the AGI is learning and making decisions?
  • How do we evaluate the programs and associated algorithms once the machine has gained human-like comprehension, as with black-box AI?

The session intentionally stayed away from a deep discussion on the mechanics of the technology to foster the dialogue and thinking necessary to reflect on the ramifications, pro or con, of this growing technological capability, its future direction, and its impact on our business and social lives.

Over time, fewer and fewer technologies will be considered part of AI, because their capabilities will be so much a part of our daily lives that we won’t even think of them as AI. This is referred to as the “AI effect.” Let’s not hesitate to ask the tough questions, to ensure we are responsible and ethical in our development and use of this amazing technology as it continues to integrate into our daily routines to make our lives easier.

Share your thoughts on the ethics of AGI and other emerging tech in the comments below. We would love to hear from you and see you at the 2020 GRC conference, planned for 17-19 August 2020 in Austin, Texas, USA.

Category: Risk Management
Published: 8/15/2019 9:58 AM


Auditing a Migration Plan When Transferring from On Site to the Cloud


Katsumi Sakagawa

Have you ever audited a computer system’s migration plan when transferring it from on site to the cloud? Here are some recommendations to keep in mind, based on lessons learned from migration practices:

Clarify the work burden mitigation effort. Once cloud migration is complete, it is important to clarify what burden has been mitigated by the move from on site to the cloud – for example, automatic scalability. If the company’s infrastructure meets the requirements for an automatic scaling service, it can enjoy not only the service but also cost savings. Conversely, an environment consisting of many individual physical servers and few virtualized systems will still have to address how the migration mitigates its operational burden.
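The cost-savings side of the autoscaling argument above can be illustrated with back-of-the-envelope arithmetic: on site, you provision for the busiest hour and pay for that capacity all day; with autoscaling, you pay only for what each hour needs. The rate and demand figures below are invented for illustration (costs kept in integer cents to avoid rounding):

```python
# Hypothetical figures: 10 cents per server-hour, and a daily load profile of
# 8 quiet hours (2 servers), 8 peak hours (10 servers), 8 moderate hours (4).
RATE_CENTS_PER_SERVER_HOUR = 10
hourly_demand = [2] * 8 + [10] * 8 + [4] * 8   # servers needed, hour by hour

# On site: buy enough capacity for the busiest hour and run it around the clock.
fixed_cost_cents = max(hourly_demand) * len(hourly_demand) * RATE_CENTS_PER_SERVER_HOUR

# Cloud autoscaling: pay only for the servers each hour actually needs.
scaled_cost_cents = sum(hourly_demand) * RATE_CENTS_PER_SERVER_HOUR
```

Under these assumed numbers the fixed provisioning costs 2400 cents per day against 1280 for autoscaling; an audit would check that the real workload is this bursty before crediting the migration with such savings.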

Verify there is no loss of security functions. A cloud vendor provides various security services; however, when transferring to a cloud environment, companies should examine whether any security services and controls that were in place on site would be lost or downgraded. For instance, if a company currently runs a laboratory-type anti-virus sandboxing system, an AI-based filtering system or an industry-specific scoring system as a firewall, it should check whether that system can transfer onto the cloud vendor’s service, as well as how it is priced.

Find out the current application’s operating system and the infrastructure supporting it, and determine whether they can migrate directly to a cloud environment. If the target application the enterprise is seeking to shift runs on a specialized legacy OS that the cloud vendor doesn’t support, it may be necessary to migrate off the legacy OS first.

Finally, look at the risk mitigation procedure that will lead to the systems going live on the cloud. There are many layers involved – the internet connection, the OS infrastructure, middleware, application infrastructure, application server and application schema – and a company often cannot migrate them without upgrading them. Each layer requires its own upgrade activities and tests, so it is important to plan a step-by-step migration schedule; migrating everything at once is not always the best solution. In addition, when considering risk mitigation, rollout and rollback procedures should be designed by the user. The user is the most risk-sensitive party and should be responsible for mitigating hazards.
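The step-by-step rollout with a user-owned rollback procedure can be sketched as a simple control loop: apply each layer in order, and on any failure undo the completed layers newest-first. The layer names and the simulated failure below are illustrative, not a real migration plan.

```python
# Hedged sketch of layer-by-layer migration with rollback, per the
# recommendation above. apply_step/rollback_step are user-supplied actions.

def migrate(layers, apply_step, rollback_step):
    """Apply each layer in order; on any failure, roll back completed layers in reverse."""
    completed = []
    for layer in layers:
        try:
            apply_step(layer)
        except Exception:
            for done in reversed(completed):   # newest change undone first
                rollback_step(done)
            return False, completed
        completed.append(layer)
    return True, completed

log = []

def apply_step(layer):
    if layer == "middleware":                  # simulate a failed cut-over mid-plan
        raise RuntimeError(f"cut-over failed at {layer}")
    log.append(f"apply:{layer}")

ok, completed = migrate(
    ["internet connection", "os", "middleware", "application"],
    apply_step,
    rollback_step=lambda layer: log.append(f"rollback:{layer}"),
)
```

When the middleware cut-over fails, only the two completed layers are rolled back, in reverse order, and the application layer is never touched – exactly the containment a step-by-step schedule buys over an all-at-once migration.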

Category: Audit-Assurance
Published: 8/13/2019 2:59 PM


The Digital Age: A New World of Purpose-Driven Opportunity


Jon Duschinsky

Editor’s note: Jon Duschinsky, an entrepreneur, social innovator and firm believer in leading a purpose-driven existence, will be the closing keynote speaker at ISACA’s EuroCACS/CSX 2019 conference, to take place 16-18 October in Geneva, Switzerland. Duschinsky recently visited with ISACA Now and shared his thoughts on why being purpose-driven is more realistic than ever in today’s digital age. For more of Duschinsky’s insights, listen to his recent appearance on the ISACA Podcast.

ISACA Now: Why is being purpose-driven so important for professionals?
Purpose-driven means that you’re clear on what you do, you’re clear on how you do it, and most companies and professionals are pretty clear on those two today. The bit that tends to get lost in all this is why. Why has this group of human beings come together in this corporate structure to do the thing they do? And if the answer to that is to make a profit, that is not the why. That is a result of the why. And so purpose-driven companies are companies that have understood what that why is and have gotten really clear what their purpose is – why they get up in the morning, why they work, why they innovate, why they create, why they make their product, why they serve their customers. … And the truth today is you make more money by making a difference.

ISACA Now: You have been described as a serial entrepreneur. What do you find so intriguing about entrepreneurship?
I kind of go through life encountering with curiosity, seeing things, seeing concepts, seeing ideas, and drawing connections between them – perhaps connections that others haven’t seen before or connections in new ways, and then from that being able to sort of articulate a vision to turn that connection into something that can be communicated and articulated – ‘What if we did this?’ – and then it’s really about enrolling and inspiring others to say ‘Oh, that would be cool, why don’t we all try and do that together?,’ because the first thing about entrepreneurship is you can’t be an entrepreneur on your own.

ISACA Now: In one of your past presentations you cited a statistic that nearly half of all jobs will be replaced by technology in the next 10 years, at least in some regions. What comes to mind when you think about that type of jarring possibility?
What’s happening is that today we don’t need people to work the machines anymore, and that’s the seismic shift. Shifts of that scale and their ripple effects are going to be felt in every family, in every community, in every company. There’s a lot of talk about how the US and other developed countries are heading toward massive levels of structural unemployment. That is false. I do not believe we are going there. … What we need to get as the humans are no longer needed to work the machines is to then tap into something that is fundamentally human, which is that when we are given a little bit of freedom from needing to work the machines, when we’re given the breathing room from needing to ensure our basic survival, then humans are free to seek more meaning. … We are seeing this new world of opportunity where human beings are actually given the time and the space to be able to pursue the things that matter most to them.

ISACA Now: What are the biggest keys to successful communication when it comes to social innovation in the enterprise setting?
The keys to communication come back to this idea of clarity on the why. It’s very easy to talk about what you do and how you do it. But actually, the communication at that level is fairly transactional; it’s about the process and the what we’re doing. It’s a set of tasks. We’re communicating at a tactical level. When you get real communication, when you get real connection, what you get is something called enrollment. It’s buy-in. When you really communicate effectively with people, you get not just their understanding – them sitting there nodding, knowing how to execute the tactics or the process – but their buy-in at almost an emotional level with the thing that you’re sharing. They really get it. And to get it you have to understand not just what it is and how it works, but you have to understand and be aligned on why it’s important. That’s the really critical piece, and it’s what differentiates so many companies that do this.

ISACA Now: How will the educational system need to adapt to keep pace with the evolving technology landscape?
When I spend time with CEOs and business leaders, they’re fairly unanimous in the [opinion] that the educational system, at all levels now, is not fit for purpose. … We need education systems and education styles that enable young people, and this starts very young, to grow into their creativity rather than having it tested and normalized out of them. That’s the first thing. The second thing is that we need to have a realignment between higher education and the needs of the workforce, which means that the two have to be much more closely aligned. 

Category: ISACA
Published: 8/12/2019 2:58 PM


Last updated: 20/08/2019 @ 17:43