Research Methods and Professional Practice

(Cover image: AI generated with DALL-E 3)

Learning Outcomes
  1. Appraise the professional, legal, social, cultural and ethical issues that affect computing professionals
  2. Appraise the principles of academic investigation, applying them to a research topic in the applicable computing field
  3. Evaluate critically existing literature, research design and methodology for the chosen topic, including data analysis processes
  4. Produce and evaluate critically the resulting research proposal for the chosen topic.

Reflection

Ethics in Computing in the Age of Generative AI

Top 5 Principles

Topic

Global consensus on ethical Artificial Intelligence (AI) policies.

Outcomes

1: The Corrêa et al. (2023) dataset does not capture generative AI's impact and needs supplementing with industry analysis. Recent advancements, such as the Bletchley Declaration and the EU AI Act, are steps toward accord and enforcement (European Parliament, 2023; GOV.UK, 2023). Industry research shows a balance is needed between human-centric approaches and innovation, and that interoperability is essential to global consensus (Holistic AI, 2024; OECD.AI, 2024).

Feedback

'Strong academic writing style' and good visuals; however, further criticality was requested.

Outcomes

2: Used Miessler's (N.D.) inductive (observation then idea) or deductive (idea then observation) reasoning.

Feedback

100%

Collaborative

Code of Ethics and Professional Conduct

Relevant ACM and BCS codes

Topic

Association for Computing Machinery (ACM) (N.D.) ethics case study.

Outcomes

1: Analysed the 'Abusive Workplace Behaviour' case study, identifying six ethical code violations. Categorised each violation (social, professionalism and legal) and compared the ACM (2018) code with the British Computer Society (BCS) (N.D.) code. Countered the assertion that the victim was arrogant, which misrepresented Milyavsky et al. (2017) and risked victim blaming and reframing, as per LaVan and Martin (2021).

Feedback

'Insightful' and 'comprehensive and well-structured', but more on enforcement and mental health impact was requested. One peer argued the victim was an 'arrogant high achiever' and 'unethical' for not reporting further.

Topic

Implementing deep learning tools and/or techniques in media and entertainment recommendation systems.

Outcomes

2: Followed University of Essex Online (UoEO) (N.D.) questions and Paul and Criado's (2020) methodologies.

Feedback

Narrow the focus to the primary topic.

Reflection

Case Study: Inappropriate Use of Surveys

(Figure: META stock chart, Yahoo Finance)

Topic

Exemplify inappropriate survey use beyond Cambridge Analytica (Afriat et al., 2020).

Outcomes

1: Push polls are also unethical (Murphy et al., 2021). However, users still want social media (Afriat et al., 2020). Academic research appears more effective when combined with industry sources. Yahoo Finance's (N.D.) stock graph reflected the lack of regulatory and ethical impact more succinctly than Wagner's (2021) penalty and privacy discussion.

Feedback

'Well paced and reflects key facts in a direct manner'. Good debate, citation, graphics, and layout.

Questionnaire critique

Outcomes

3: Critiqued Camden Council's (2023) questionnaire using Breitling's (2018) 'The 7 Deadly Survey Questions'. Discovered leading, assumptive, and pushy questions designed to improve sustainable behaviour, not gather accurate data.

Collaborative

Accuracy of Information

Topic

Highlight legal, social and professional impacts of ethical choices.

Outcomes

1: Used Holmes et al. (2017) and Berenson et al. (2019) to demonstrate ACM (2018) code of conduct breaches. The Andrew Wakefield case illustrated researcher accountability (Hasnain, 2013).

3: A peer raised Godlee et al.'s (2011) identification of Wakefield's conflict of interest arising from pharmaceutical litigation. I noted that Godlee et al.'s (2011) publisher had its own conflict of interest due to pharmaceutical funding, thereby reducing credibility. Another peer's differing viewpoint in their initial post demonstrated how researcher bias or error can change perspectives (Saunders et al., 2019).

Feedback

'Ideal real-world example', but more explicit responses were requested.

Literature Review

Topic

Implementing deep learning techniques in media and entertainment recommendation systems.

Outcomes

What (2)

While search strategy guidance was practical, writing guidance felt philosophical (UoEO, N.D.; Ermel et al., 2021). Therefore, I used Dawson's (2015) outline: literature overview, critical evaluation, wider context and gap identification. Although media is my industry, deep learning was new to me. Academic sources like Zhang et al. (2021) provided technical overviews, whereas industry sources like Netflix's Steck et al. (2021) delivered practical application.

So What (3)

Restricting Google Scholar (N.D.) search terms focused the analysis while still exploring both industry and academic perspectives. Discovering Hammock's (2018) use of tables simplified analysis and presentation, while graphical representation clarified relationships. However, Dawson (2015) separated out critical evaluation, so feedback requesting criticality throughout was confusing.

Now What

Feedback taught me to avoid bullet points and add more critical analysis. Additionally, I find contrasting academia and industry valuable and will continue this practice.

Feedback

80% (Distinction). 'The level of knowledge provided is outstanding'. 'Outstanding structure and presentation', including visuals. However, I was advised to add more context at the beginning, include 'critical pros and cons' throughout, and avoid bullet points.

Topic

The impact of large language models in media and entertainment.

Outcomes

2: Primarily followed Dawson (2015).

Feedback

An unfamiliar topic was deemed suitable and beneficial, as it lacks 'unconscious' knowledge.

Outcomes

What (3)

Although I received a distinction in Numerical Analysis, I found this section frustrating because of errors in the material. The descriptive analytics (unit 6) documentation used deprecated Excel formulas, so I applied Berenson et al.'s (2019) modern equivalents. We reported issues and shared fixes via the WhatsApp group I created. For example, I shared Analysis group installation instructions, necessary for inferential statistics (unit 7) such as t-tests. I extended my analysis with a normality plot and a Jarque-Bera normality test. Despite missing assignment details, I inferred the requirements and completed the task. Charts (unit 9) were interesting; I learned new features, created a frequency histogram, and added a box plot to confirm outliers.
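As a minimal sketch of the steps described above, translated into Python with SciPy rather than Excel, the analysis looks roughly as follows. The dataset and the comparison mean of 50 are placeholders I have invented for illustration, not figures from the assignment.

# Illustrative sketch only: placeholder data and a hypothetical comparison mean,
# mirroring the workflow above (descriptive statistics, t-test, Jarque-Bera
# normality test, box-plot-style outlier check).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=52, scale=10, size=100)  # placeholder dataset

# Descriptive analytics (equivalents of the deprecated Excel formulas)
print("mean:", np.mean(sample))
print("sample stdev:", np.std(sample, ddof=1))

# Inferential statistics: one-sample t-test against a hypothesised mean of 50
res = stats.ttest_1samp(sample, popmean=50)
print(f"t = {res.statistic:.3f}, p = {res.pvalue:.3f}")

# Jarque-Bera normality test (complementing a normality plot)
jb = stats.jarque_bera(sample)
print(f"Jarque-Bera = {jb.statistic:.3f}, p = {jb.pvalue:.3f}")

# Outliers as a box plot would flag them: beyond 1.5 * IQR from the quartiles
q1, q3 = np.percentile(sample, [25, 75])
iqr = q3 - q1
outliers = sample[(sample < q1 - 1.5 * iqr) | (sample > q3 + 1.5 * iqr)]
print("box-plot outliers:", outliers)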

So What

The errors and omissions negatively impacted my learning experience. However, solving these issues and exceeding the assignment requirements enhanced my problem-solving skills and strengthened peer relationships. The experience highlighted the importance of accurate and up-to-date instructional materials.

Now What

In the future, I can revisit Excel statistics instructions. I will continue identifying errors to improve learning experiences and collaborating with peers to enhance shared understanding.

Feedback

Submitted. None.

(Table 3 and Table 4)

Topic

The impact of Large Language Models (LLMs) in media and entertainment.

Outcomes

What (3)

Industry laid LLMs' foundations with Google's Transformer paper by Vaswani et al. (2017), while academia provided in-depth explanations, such as Raiaan et al.'s (2024) review. Industry sources like PwC (2023) detailed media and entertainment but not LLMs, while academic papers articulated LLMs but offered limited research across media and entertainment.

So What (4)

I was surprised that neither Dawson (2015) nor Saunders et al. (2019) fully articulated the research proposal methodology. My Table 4 diagram, however, provided an overview of the options and choices. The omission of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) was unexpected, given its common use in literature reviews (Page et al., 2021). The volume of LLM papers, many of them preprints, was challenging. Using a spreadsheet to track Google Scholar (N.D.) searches (sketched below) and Table 3 to visualise the research gap was essential for structuring the proposal.
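As a purely illustrative sketch of such a search log, the snippet below writes one row per query; the field names, search strings, and counts are hypothetical examples, not entries from my actual spreadsheet.

# Hypothetical search log: one row per Google Scholar query, recording the
# terms, date filter, hit count and how many results were screened in.
import csv

searches = [
    {"date": "2024-05-01", "query": '"large language models" AND "media"',
     "year_filter": "2017-2024", "hits": 412, "screened_in": 18},
    {"date": "2024-05-02", "query": '"LLM" AND "entertainment" AND "recommendation"',
     "year_filter": "2017-2024", "hits": 96, "screened_in": 7},
]

with open("search_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(searches[0].keys()))
    writer.writeheader()
    writer.writerows(searches)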

Now What

Moving forward, I will use Table 4's comprehensive methodology, aggregating strengths from Dawson (2015) and Saunders et al. (2019) with Page et al.'s (2021) PRISMA guidelines. These, along with ethical considerations and risk analysis, will enhance the validity and reliability of my research (Saunders et al., 2019). I will continue using tools like spreadsheets, visuals, and tables to analyse data. Using both industry and academic sources remains essential to critically evaluating expertise.

Feedback

TBC

Outcomes

What

My SWOT analysis captured my strengths as a media technology professional with 30 years' experience, opportunities for public speaking, and service as a student representative and module WhatsApp facilitator. It also noted my weakness in balancing quality with self-care, and the threats of course errors and less relevant material (MindTools, N.D.). My skills matrix showed growth in ethics, critical writing, research methods, and statistical analysis in Excel. I enthusiastically engaged in assignments relevant to AI in media.

So What

Despite these achievements, much of the coursework focused on methodology, writing, mathematics, and reflection, requiring me to seek AI expertise through external sources. Furthermore, module errors hindered efficiency. Balancing coursework with staying relevant meant dedicating time to less relevant topics, impacting self-care and professional growth.

Now What

My action plan aims to mitigate these challenges (Cottrell, 2021). I will continue to combine industry and academic research, build expertise through public speaking, and supplement with external training. My student WhatsApp group supports collectively addressing course errors. However, as I prioritise knowledge, academic quality, and timely delivery, my biggest challenge remains allocating time for self-care.

Outcomes

What

This e-portfolio reflected on research method processes. Appraising issues (1) through the ethics and survey-use reflections showed that practical insights from industry research, such as OECD.AI's (2024) AI Principles, were often missing in academic contexts. Moreover, employing academic investigation (2) in the literature review demonstrated the need for industry examples like Steck et al. (2021).

So What

The collaborative learning loop process highlighted the importance of thorough review and critical evaluation (3), as seen in the misinterpretation of Milyavsky et al. (2017) and the conflict of interest of Godlee et al.'s (2011) publisher. These exemplified the potential for researcher bias or error. Critically evaluating Dawson's (2015) and Saunders et al.'s (2019) methods (4) for the research proposal helped me ascertain their suitability. Discovering Page et al.'s (2021) PRISMA guidelines and comparing industry and academic research helped me identify a critical research gap.

Now What

I will apply critical evaluation skills to all stages of my research, using tools like PRISMA guidelines to systematically structure reviews. I will continue seeking feedback to refine my writing and critical thinking skills. Integrating industry and academic research will help me maintain a balanced perspective and ensure professional applicability.