TODO

TODO in MD cloud

Project Planning

Project Plan: A Field Experiment on GenAI Reshaping Prototyping

1. Project Goal: To systematically evaluate how different modalities of Generative AI (integrated tools vs. conversational agents) impact the quality, efficiency, and innovation of the business prototyping process compared to traditional methods.

2. Estimated Project Timeline: 24 Weeks (approximately 6 months)

3. Project Phases and Timeline

This project is structured into five distinct phases, each with clear objectives, activities, and deliverables.

Phase 1 - Setup & Preparation (Weeks 1-4)

  • Objective: Finalize the experimental design and secure all necessary resources and approvals.
  • Key Activities:
    • Finalize the detailed research protocol and all experimental materials (e.g., business challenge, surveys).
    • Recruit and brief the expert panel for prototype evaluation.
    • Procure and configure all necessary software licenses (Figma, Uizard.io, AI agent API access).
    • Submit the research proposal for Institutional Review Board (IRB) or internal ethics committee approval.
  • Milestones & Deliverables: Finalized Research Protocol; Confirmed Expert Panel; fully functional technology stack; IRB/Ethics Approval.

Phase 2 - Pilot Study (Weeks 5-6)

  • Objective: Validate the experimental procedure, instruments, and technical setup on a small scale.
  • Key Activities:
    • Recruit a small cohort of participants (e.g., 6-12 individuals) representative of the target profile.
    • Run the full experiment with the pilot group, covering all conditions.
    • Collect feedback on task clarity, survey length, and technical issues.
    • Perform a preliminary data analysis to check for floor/ceiling effects.
  • Milestones & Deliverables: Pilot Study Report; revised and validated experimental materials; Go/No-Go decision for the main experiment.

Phase 3 - Main Experiment & Data Collection (Weeks 7-12)

  • Objective: Execute the full-scale field experiment and collect all raw data.
  • Key Activities:
    • Launch a broad recruitment campaign through corporate partners and professional networks.
    • Schedule and onboard all participants.
    • Conduct the experimental sessions for all six conditions, ensuring protocol fidelity.
    • Securely collect and store all outputs: submitted prototypes, survey responses, screen recordings, and interaction logs/transcripts.
  • Milestones & Deliverables: Full cohort of participants recruited; completion of all experimental sessions; complete raw dataset (quantitative and qualitative).

Phase 4 - Data Analysis & Evaluation (Weeks 13-18)

  • Objective: Process and analyze the collected data to test the research hypotheses.
  • Key Activities:
    • Distribute prototypes to the expert panel for blind scoring against the defined rubrics.
    • Transcribe qualitative data from interviews and interaction logs.
    • Perform statistical analysis on quantitative data (e.g., ANOVA, regression models) to compare conditions.
    • Conduct thematic analysis on qualitative data to identify patterns in user experience and strategy.
    • Synthesize findings from both quantitative and qualitative analyses.
  • Milestones & Deliverables: Prototype quality scores from the expert panel; coded qualitative data; completed statistical analysis report; synthesized key findings document.

Phase 5 - Reporting & Dissemination (Weeks 19-24)

  • Objective: Communicate the research findings to academic and industry stakeholders.
  • Key Activities:
    • Draft a full manuscript for submission to a target academic journal (e.g., MIS Quarterly, CHI Conference Proceedings).
    • Create a concise, visually driven business report and presentation for project sponsors.
    • Develop presentations for academic conferences and industry events.
    • Prepare a public-facing summary or blog post to share key insights more broadly.
  • Milestones & Deliverables: Draft academic manuscript; final report and presentation for sponsors; conference submission(s).

4. Resource Planning

  • Personnel:

    • Principal Investigator (PI): To oversee the project, lead analysis, and co-author publications.
    • Research Assistants (2): To manage recruitment, scheduling, conduct experimental sessions, and perform initial data processing.
    • Expert Panel (3-5): Industry experts (Product Managers, Senior Designers) to evaluate prototypes.
  • Tools & Technology:

    • Prototyping Software: Licenses for Figma (or similar), Uizard.io (or similar).
    • AI Agent: API access to a state-of-the-art LLM (e.g., GPT-4, Claude 3) with a custom-built interface.
    • Survey Platform: Qualtrics or a similar platform for questionnaires.
    • Statistical Software: R, Python, or SPSS for data analysis.
  • Participants:

    • Target N of ~120-180 participants (20-30 per condition) to ensure statistical power.
    • Compensation/incentives for participants and the expert panel.
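As a rough check on the 20-30 per condition target, the following sketch runs a normal-approximation power calculation for a pairwise two-group comparison. The effect size d = 0.8 and the 80% power / 5% alpha defaults are illustrative assumptions, not values from the protocol:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for a two-sample mean comparison (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance quantile
    z_beta = z.inv_cdf(power)           # power quantile
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A large effect (d = 0.8) needs ~25 participants per condition,
# which sits inside the planned 20-30 range; smaller effects need more.
print(n_per_group(0.8))
```

With six conditions, d = 0.8 at 25 per group implies roughly 150 participants overall, consistent with the 120-180 target; a medium effect (d = 0.5) would push the per-group requirement above 60.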

5. Risk Management

  • Recruitment Delays: Mitigation via multiple recruitment channels and strong relationships with corporate partners.
  • Technical Failures: Mitigation through extensive pilot testing and having technical support on standby during sessions.
  • Inconclusive Results: The 2x3 design is robust, but if some hypotheses are not supported, the rich qualitative data will still provide valuable insights into why the results occurred, ensuring a meaningful project outcome regardless.

Networking

https://gemini.google.com/app/34ad4d0813c80573

Subject: Research Collaboration: TU/e & [Recipient’s Company Name]

Hi [Recipient’s Name],

It’s a pleasure to meet you.

My name is VM, and I am a PhD candidate at Eindhoven University of Technology (TU/e), focusing on AI and Product Management.

One of the papers I am working on seeks to be the first scientific benchmark to show exactly how different generative AI tools transform prototyping. It aims to provide the definitive roadmap for companies that want to lead the AI revolution by comparing three conditions: Traditional Prototyping, Prototyping with Generative AI, and Collaboration with an AI Agent.

I am writing to ask if you would be open to a research collaboration—specifically, by granting access to your tool for the 200 participants on the day of the research execution workshop.

Would you be open to discussing this further?

Thank you, VM

Solutions Study

I'm unsure whether to do everything in Slack at once..

https://gemini.google.com/app/7219a081fa485e3c

New Group - Workshop Friendly

https://gemini.google.com/app/2f9a39d7463603da https://www.sessionlab.com/

Video Conference

https://docs.daily.co/reference/daily-react

https://getstream.io/video/sdk/react/

https://cloud.livekit.io/login?r=%2F

Referencia de projeto e UI aqui: https://cloud.livekit.io/projects/p_plzgqiq852m/sandbox/templates/meet

Support Chat

Slack https://www.coscreen.co/coscreen-slack-integration

Is it possible to include a chat widget in a custom React application that can send and receive messages from a Slack channel?

https://www.socialintents.com/blog/how-to-add-live-chat-to-your-website-using-slack/

https://chatlio.com/app/#/setup/84439
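On the Slack question above: yes, this is a common pattern. The widget posts to your own backend, which relays the message into a channel via a Slack Incoming Webhook; receiving messages back requires the Slack Events API and a bot token. A minimal relay sketch in Python follows; the webhook URL is a placeholder, and the payload is the standard Incoming Webhook `{"text": ...}` shape:

```python
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def build_slack_payload(user: str, text: str) -> bytes:
    """Format a chat-widget message as a Slack Incoming Webhook JSON payload."""
    return json.dumps({"text": f"*{user}*: {text}"}).encode("utf-8")

def relay_to_slack(user: str, text: str) -> None:
    """POST the message to the Slack channel bound to the webhook."""
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=build_slack_payload(user, text),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # network call; needs a real webhook URL
```

Tools like Chatlio and Social Intents (linked above) package exactly this relay plus the return path, so building it by hand is only worthwhile if the widget must be fully custom.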

Cloud Browser

https://kasm.com/cloud-personal

Co-Browsing

WebFuse:

Question:

  • How to open a session with the tools already authenticated
  • “can_host”: True in magic link api
  • https://chatgpt.com

Explain it to him better: the session has to be opened first (via the API).

Unusual traffic: https://dev.webfuse.com/session-access/#_top

Top Surfly alternatives for co-browsing and remote support include Upscope, Fullview, Cobrowse.io, and Glia, which focus on real-time visual collaboration without installations. For broader remote support, top options are TeamViewer, AnyDesk, and ConnectWise ScreenConnect. These tools enable secure, session-based screen sharing and co-browsing.

Top Co-Browsing & Visual Engagement Alternatives

  • Upscope: Often cited as a direct, user-friendly alternative for screen sharing and co-browsing.
  • Fullview: Strong for website-based troubleshooting and customer support sessions.
  • Cobrowse.io: Specialized in real-time co-browsing with strong mobile SDK integrations.
  • Glia (formerly Salemove): A comprehensive platform integrating voice, video, and co-browsing for customer service.
  • Recursive Labs: Focuses on secure visual engagement, including co-browsing and screen sharing.

LMS

https://www.google.com/search?q=embedded+lms&oq=lms+embe&gs_lcrp=EgZjaHJvbWUqCAgBEAAYFhgeMgYIABBFGDkyCAgBEAAYFhgeMggIAhAAGBYYHjIICAMQABgWGB4yBwgEEAAY7wUyBwgFEAAY7wUyBwgGEAAY7wUyCggHEAAYgAQYogQyBwgIEAAY7wXSAQg2NDUzajBqN6gCCLACAfEFftBMcDBQ4CE&sourceid=chrome&ie=UTF-8

https://www.iseazy.com/br/teste-gratis/

https://coassemble.com/dashboard/courses

The LMS is no longer enough: The shift toward embedded learning - Coassemble https://coassemble.com/learn/JV8ND7N6VIFYSRH2SH1LASTT72DH64AB

Can I embed a course into my website?

Yes, you can embed a Coassemble course into your website using a few different methods! Which approach would you like me to guide you through?

  • Sharing a Course Link - the simplest way to share your course; you can add the link to your website.
  • API Embed - embed courses directly into your product, app, or website for a seamless, branded experience.
  • Embed Screen - use an Embed screen within a Coassemble course to integrate third-party tools like surveys or forms, which could include content from another source.
  • Template Links - lets others copy your course as an editable template into their own Coassemble workspace.

Which option works best for your situation?

API Embed

API Embed allows you to embed Coassemble courses directly into your own product, app, or website, providing a seamless and fully branded learning experience. This means no external links or switching tabs for your users. With API Embed, you have full control over the look, feel, and function of your courses within your environment. It also offers real-time progress tracking and secure data syncing. This feature is ideal for companies looking to:

  • Onboard new users
  • Train customers
  • Run a member academy

Since API Embed is an advanced technical feature, it’s typically part of our Enterprise solutions. Would you like to explore how this could work for your specific needs, or are you an existing Coassemble user?

Run a member academy

To help you with running a member academy using API Embed, I’ll need a few more details to connect you with the right team. Could you please provide the following: First Name, Last Name, Email, Company.

Great! Running a member academy with API Embed means you can seamlessly integrate your Coassemble courses directly into your platform. This allows your members to access training and educational content without ever leaving your site, ensuring a consistent brand experience. We’ve received your inquiry about embedding a course for your PhD Research Project. We’ll be in touch shortly to provide more details on how we can support your experiment with this functionality.

 Testing with localhost

Webfuse doesn’t support direct access to localhost or 127.0.0.1, since these point to your own machine and can’t be reached by Webfuse’s servers.

To test your local app inside a SPACE, you’ll need to expose it with a tunneling tool that gives it a public URL (e.g., ngrok).


Research Planning

Planning from Harvard Article with Manus: https://manus.im/app/uokXb2gS38YtGJAUtFWawG

PhD02 - Harvard Paper https://notebooklm.google.com/notebook/b25ffac8-41a9-442b-a7b8-56f5819bb17b

Inspirations

The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise

NBER working paper version: https://www.nber.org/system/files/working_papers/w33641/w33641.pdf

Paper

Methodology

https://gemini.google.com/app/692e6fab076db169

Pre-notes

200 people / 9 participants per block ≈ 22-and-a-bit blocks; 22 blocks × 6 outputs per block = 132 solutions

22 sessions with a designer / 6 designers ≈ 4 sessions per day

Time slots: 8-10, 10-12, 14-16, 16-18; $10/month

33 simultaneous participants (a lot!); 6 people organizing
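The session arithmetic in these notes, spelled out as a quick sanity check:

```python
participants = 200
per_block = 9
blocks = participants / per_block      # 200 / 9 is roughly 22.2, i.e. "22 and a bit" blocks

outputs_per_block = 6
solutions = 22 * outputs_per_block     # 22 full blocks x 6 outputs each = 132 solutions

designer_sessions = 22
designers = 6
sessions_per_designer = designer_sessions / designers  # about 3.7, fitting the 4 daily time slots

print(round(blocks, 1), solutions, round(sessions_per_designer, 1))
```

The four time slots (8-10, 10-12, 14-16, 16-18) give each designer capacity for 4 sessions per day, which covers the ~3.7 sessions each would need to run.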

Metrics

Definition of pillars (3 or 4), including AI Collaboration; table of outcome metrics and artifacts: https://gemini.google.com/app/2b18d839eccf973d

From NLLM: https://notebooklm.google.com/notebook/b25ffac8-41a9-442b-a7b8-56f5819bb17b

The Technicality of a solution is a measure used in the research to assess the orientation of the ideas proposed by participants in the new product development task.

Key characteristics and context regarding the Technicality of a solution from the sources include:

  • Measurement: It is assessed using a 1-7 Likert scale. The scores were assigned by the same human expert evaluators who assessed the overall quality of the solutions.
  • Scale Interpretation: Higher values on the scale indicate ideas that are more technically oriented. Conversely, lower values suggest commercially oriented, market-focused concepts.
  • Purpose in the Study: The technicality score was used to capture how expertise was configured and deployed by participants. It helped illustrate the degree of functional boundaries present in idea generation.
  • Findings Related to Technicality: The study found that without AI, R&D professionals tended to suggest more technical solutions, while Commercial professionals generally proposed commercially-oriented ideas. However, when using Generative AI, professionals produced balanced solutions, regardless of their professional background, indicating that AI helped participants bridge functional boundaries.

Length and AI Configuration NLLM

Performance: other metrics to consider include evaluation of usability, functionality, visual design, and alignment with the problem requirements.

The variable Length measures the total number of words in the solutions submitted by participants. It is used as a performance metric to understand how AI and team configuration affect the comprehensiveness and detail level of the proposed solutions.

The relationship between Length and other key performance metrics, based on the study findings, is strongly tied to the adoption of Generative AI:

Length and AI Configuration

The most substantial finding regarding solution length is the effect of AI assistance:

  • Substantial Increase with AI: The introduction of AI led to substantially longer outputs.
    • Solutions from Individual + AI participants were significantly longer than the control group (individuals without AI), showing an increase of approximately 504.5 words.
    • Solutions from Team + AI participants were also significantly longer than the control group, increasing by approximately 543.7 words.
  • Marginal Increase without AI: Teams working without AI produced solutions that were only marginally longer than the individual control group, increasing by about 30.5 words over the control mean.

This indicates that AI plays a critical role in increasing the comprehensiveness and detail of the final output.

Length, Time, and Productivity

A critical relationship emerges when comparing length (comprehensiveness) with the time participants spent on the task:

  • The increased speed and comprehensiveness of AI-enabled work are evidenced by participants producing significantly longer solutions in less time.
  • Individuals and teams using AI demonstrated a substantial reduction in time spent working on the solution compared to the control group.
  • The effects of increased length associated with AI adoption persisted across all analyzed specifications.

Length and Quality

While the sources do not specify a direct numerical correlation between the word count and the quality score, the trends reveal that the working configurations resulting in the highest objective quality scores also produced the longest solutions:

  • The highest quality solutions, on average, came from the Individual + AI and Team + AI conditions.
  • These same conditions were responsible for the substantial increase in solution length.
  • Solutions produced by the Team No AI condition were modestly better in quality (0.24 standard deviations better than the control), and they were only marginally longer than the control group.

In essence, AI allows professionals to overcome the traditional trade-off between speed and detail, enabling them to produce more comprehensive and higher-quality solutions while investing less time.


Analogy: You can think of the relationship between Length and performance metrics in the AI-enabled setting like using a high-powered food processor instead of a knife and cutting board. The task (preparing a meal, or generating a solution) is completed faster (reduced time spent), and the final result (the meal’s ingredients, or the solution’s output) is much greater in volume and detail (increased length), suggesting a higher level of completeness or comprehensiveness.

Definitions

Main Characteristics of the Methodology

Study Type and Setting

    ◦ The research utilized a large-scale, pre-registered field experiment.

    ◦ The study was conducted at Quellor Bank*, a global bank, between November and December 2025, ensuring a real-world environment for innovation-focused knowledge work.

    ◦ Participants worked on real product innovation challenges relevant to their business units, focusing on real customer requests that needed further study and evaluation through prototyping as a product management activity.

Participants and Task

    ◦ The study targets 200 experienced professionals at Quellor Bank*.

    ◦ The professionals were drawn from Product Management and Product Development functions.

    ◦ The task involved participation in a one-day virtual product management workshop.

    ◦ Collaboration, when applicable, occurred remotely via Microsoft Teams.

Experimental Design

    ◦ A 3×2 experimental design was used, randomly assigning participants to one of six conditions:

        1. Control: Individuals working without Generative AI (GenAI).
        2. Team No AI (T1): Teams of two humans (Product Management + Product Development) without GenAI.
        3. Individual + AI (T2): Individuals working alone with GenAI assistance.
        4. Team + AI (T3): Teams of two humans (Product Management + Product Development) working with GenAI assistance.
        5. Individual + AI Agent (T4): Individuals working in collaboration with an AI agent.
        6. Team + AI Agent (T5): Teams of two humans (Product Management + Product Development) working in collaboration with an AI agent.

    ◦ Randomization was stratified within four clusters defined by the four business units (Daily Banking, Credit Card, Wealth Banking, and Commercial Banking). Although the bank operates globally, the study focused on a single country in the Americas, where the bank has its largest customer base.

    ◦ The GenAI tool used was XXX.

    ◦ Participants in all conditions received a one-hour training session: the no-AI groups were trained on the prototyping tool, and the AI-enabled groups were trained on prompting and interacting with the GenAI tool.

    ◦ A crossover experiment design was implemented for control groups, who repeated the task with AI after training, though primary results rely on the initial between-subject comparisons.
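The stratified randomization described above can be sketched as follows. This is a minimal illustration: the round-robin balancing scheme and the seed handling are assumptions, not the protocol's actual procedure.

```python
import random
from collections import Counter, defaultdict

CONDITIONS = ["Control", "Team No AI", "Individual + AI",
              "Team + AI", "Individual + AI Agent", "Team + AI Agent"]
UNITS = ["Daily Banking", "Credit Card", "Wealth Banking", "Commercial Banking"]

def stratified_assign(participants, seed=42):
    """Randomly assign participants to the six conditions, balanced within each
    business unit (the stratum). `participants` is a list of (id, unit) pairs."""
    rng = random.Random(seed)
    by_unit = defaultdict(list)
    for pid, unit in participants:
        by_unit[unit].append(pid)
    assignment = {}
    for unit, pids in by_unit.items():
        rng.shuffle(pids)  # random order within the stratum
        # Deal conditions round-robin so per-condition counts within the
        # stratum differ by at most one.
        for i, pid in enumerate(pids):
            assignment[pid] = CONDITIONS[i % len(CONDITIONS)]
    return assignment
```

With 200 participants spread over the four units, each unit then contributes roughly equal numbers to every condition, which is exactly what the stratification protects.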

Outcome Measures (Focusing on Three Pillars of Teamwork)

    ◦ Performance: Primary measure was Quality (1-10 scale), alongside Novelty and Feasibility (1-10 scales); also measured Time Spent (logged seconds) and Length (total words).

    ◦ Expertise: Measured the Technicality of a solution (1-7 Likert score) and classified participants by functional experience (Core job responsibility vs. Non-core job role).

    ◦ Human Sociality: Measured changes in self-reported emotional states through composite measures of Positive Emotions (enthusiasm, energy, excitement) and Negative Emotions (anxiety, frustration, distress), calculated as the difference between post-task and pre-task responses (standardized).
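The emotion-change measure above (post-task minus pre-task, standardized) can be computed as in the sketch below; the numbers are toy values, and real composites would first average the items of each scale:

```python
from statistics import fmean, stdev

def standardized_change(pre, post):
    """Per-participant change (post - pre), z-scored across the sample."""
    diffs = [b - a for a, b in zip(pre, post)]
    mu, sigma = fmean(diffs), stdev(diffs)
    return [(d - mu) / sigma for d in diffs]

# Toy composite scores for four participants (pre- and post-task):
z = standardized_change(pre=[3.0, 4.0, 5.0, 2.0], post=[5.0, 4.0, 6.0, 1.0])
```

Standardizing the differences puts Positive and Negative Emotion changes on the same scale, so regression coefficients are comparable across the two composites.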

Evaluation Process

    ◦ Solutions were assessed by a panel of 22 expert evaluators (MBA and Engineering students/recent graduates).

    ◦ Evaluators were blind to the experimental conditions and the submitters’ profiles.

    ◦ Each solution received, on average, more than three independent evaluations (1,595 evaluations across 550 solutions).

Analysis

    ◦ The causal effect of AI adoption and team configuration was estimated primarily using regression analysis.

    ◦ The analysis estimated the effects of the treatment groups against the baseline (Individual without AI), controlling for fixed effects (business unit, date) and individual characteristics.
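Because every treatment group is compared against the Individual-without-AI baseline, the core regression estimate (before fixed effects and covariates are added) reduces to a difference in group means. A minimal sketch with hypothetical scores:

```python
from statistics import fmean

def treatment_effects(scores_by_condition, baseline="Control"):
    """Each condition's effect as mean(condition) - mean(baseline); equivalent
    to OLS coefficients on condition dummies with the baseline omitted."""
    base = fmean(scores_by_condition[baseline])
    return {cond: fmean(scores) - base
            for cond, scores in scores_by_condition.items() if cond != baseline}

# Hypothetical quality scores on the 1-10 scale:
effects = treatment_effects({
    "Control":         [5.0, 6.0, 5.5],
    "Individual + AI": [7.0, 7.5, 8.0],
})
# effects["Individual + AI"] is 2.0 (7.5 mean vs 5.5 baseline mean)
```

The actual analysis additionally controls for business-unit and date fixed effects and individual characteristics, which this sketch deliberately omits.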

Survey

I am conducting a research experiment where participants use a specific tool to complete a prototyping mission. To prevent varying proficiency levels from biasing the results, I will provide foundational training. I intend to incorporate a survey-based validation to ensure each participant possesses sufficient competency. My plan involves assessments before training, after training, and at the study’s conclusion to verify that tool proficiency did not interfere with experimental outcomes. I seek to ground this approach in well-established, highly credible research—specifically Design Science Research (DSR) methodologies—to ensure the model’s validity and increase the likelihood of academic acceptance. Based on DSR principles, please propose a research design for this survey and define the specific points of measurement. https://manus.im/app/gpt9h6aNHZE0JYOhCGSKSm

I am conducting a research experiment investigating how different types of Generative AI tools impact the product development process. Participants will be randomly assigned to use one of three approaches—traditional prototyping tools, tools with integrated AI features, or a conversational AI agent—to complete a realistic business challenge involving the creation of a digital product prototype. The session will last approximately two hours, during which you will work either individually or in a small team to develop a medium- to high-fidelity prototype. All data collected will be anonymized and used solely for research purposes to help organizations better understand how to leverage AI for innovation. This study aims to address this gap by asking: How do different modalities of AI assistance—prototyping tools with integrated GenAI versus conversational AI agents—compared to traditional methods, affect the quality, efficiency, and innovation of proposed solutions for a business problem, in both individual and team settings? I seek to ground this approach in well-established, highly credible research—specifically Design Science Research (DSR) methodologies—on how to establish the measurement strategy for the study.

https://gemini.google.com/gem/f92893d80f7e/1f57d14d5a5208bb

Role: You are a senior research assistant specializing in Design Science Research (DSR) and Human-AI Interaction.

Task: Find well-established, peer-reviewed, and high-impact research (journal articles, conference papers from CHI/HCI/ISR) to provide a theoretical foundation and a measurement strategy for the following study:

Study Overview: This experiment investigates the impact of Generative AI modalities on the digital product development process. Participants are randomly assigned to three experimental groups:

  1. Traditional Prototyping (Control).

  2. Integrated AI Tools (e.g., Figma with AI, GitHub Copilot).

  3. Conversational AI Agents (e.g., ChatGPT, Claude).

Research Question: How do different modalities of AI assistance—specifically integrated GenAI versus conversational AI agents—affect the quality, efficiency, and innovation of proposed solutions compared to traditional methods in both individual and team settings?

Methodological Focus: I seek to ground this study in Design Science Research (DSR). Specifically, I need research that helps establish:

  • Measurement Strategies: Validated metrics for “quality” and “innovation” in digital prototypes.

  • DSR Evaluation Frameworks: Frameworks (e.g., Hevner’s three-cycle model or Pries-Heje’s Evaluation Framework) applicable to AI-assisted artifact creation.

  • Human-AI Collaboration: Research on cognitive load and process changes when switching between conversational vs. integrated AI interfaces.

Output Requirement: Please provide a list of 5-7 key papers with a brief explanation of how each can be used to justify the measurement strategy or experimental design.

	https://manus.im/app/adN9uGmux6GzxvI68WhUfy

Doubts

A single tool? Or in conjunction with a UX designer?

Risks

Potential Contamination of the Experiment

  • Problem: The sentence “Important to note that these abilities of the agent will be communicated to the other two condition participants…” is a major methodological red flag. By informing the other groups about the AI agent’s capabilities, you are “contaminating” the experiment. Participants in the control and integrated-AI groups may change their behavior based on that information, invalidating the comparison.
  • Suggested fix: Remove this sentence and the practice. The goal is to compare what each group does naturally with the tools it is given. The beauty of the experiment lies in seeing whether the AI-agent group discovers and uses those capabilities differently from the others.

Proposal for Itau

Hi Erika,

Good afternoon, how are you? Switching over to my university email.

Reflecting on our conversation, I see two opportunities:

1/ Measure the impact of the workshops

You mentioned that the workshop has several goals, such as accelerating Idea to Product: shortening the testing time and leveraging the business with the prototypes.

You told me you run a post-workshop survey, but how do you measure the “long tail” effect of what you are doing? After the workshop, participants keep using the tools for idea-to-product acceleration.

We could run a survey with the people involved to capture this.

The focus of the paper here would be on measuring the effectiveness of the workshops, with the model you are adopting, against the central objective.

2/ Instrument the workshop to collect data

2.1/ Part 1 - Task View (all groups)

Imagine we had a classification of tasks/decisions from easy to hard. For each interaction with the tools, before executing, workshop participants would rate how difficult they expect the task to be (easy/hard), and afterwards they would evaluate the result.

You could imagine a situation in which, on a scale of 1 to 10, the AI could process tasks up to level 4 autonomously, but needed human support for tasks up to level 7 and proved unhelpful for more complex tasks, from level 8 upwards.
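That 1-to-10 difficulty scale maps naturally to a small classifier. This is a hypothetical helper: the level-4 and level-7 cut-offs are the illustrative thresholds from the example above, not validated values.

```python
def ai_handling(difficulty: int) -> str:
    """Map a 1-10 task-difficulty rating to the expected AI role."""
    if not 1 <= difficulty <= 10:
        raise ValueError("difficulty must be between 1 and 10")
    if difficulty <= 4:
        return "AI handles the task autonomously"
    if difficulty <= 7:
        return "AI needs human support"
    return "AI unhelpful; human-led"
```

In the workshop, participants' expected and observed ratings could then be compared against these bands to see where the AI's effective autonomy frontier actually sits.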

2.2/ Part 2 - Autonomy View - within a single challenge

The idea is to compare three groups within the same challenge: humans only, AI only, and human/AI collaboration.

The model was inspired by the Harvard study mentioned in the article below.

https://sauter.digital/2025/05/05/ia-generativa-no-trabalho-em-equipe-estudo-de-harvard-revela-como-aumentar-produtividade-e-potencializar-eficiencia/

Attached is a draft.

What do you think?

Reference from NLLM

The research methodology employed for “The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise” was a large-scale, pre-registered field experiment designed to study how AI transforms collaboration.

Main Characteristics of the Methodology

  • Study Type and Setting

    • The research utilized a large-scale, pre-registered field experiment.
    • The study was conducted at Procter & Gamble (P&G), a global consumer packaged goods company, between May and July 2024, ensuring a real-world environment for innovation-focused knowledge work.
    • Participants worked on real product innovation challenges relevant to their business units, focusing on the early “seed” stage of new product development.
  • Participants and Task

    • The study involved 776 experienced professionals at P&G.
    • The professionals were drawn from Commercial and R&D functions.
    • The task involved participation in a one-day virtual product development workshop.
    • Collaboration, when applicable, occurred remotely via Microsoft Teams.
  • Experimental Design

    • A 2×2 experimental design was used, randomly assigning participants to one of four conditions:
      1. Control: Individual working without Generative AI (GenAI).
      2. Team No AI (T1): Teams of two humans (R&D + Commercial) without GenAI.
      3. Individual + AI (T2): Individuals working alone with GenAI assistance.
      4. Team + AI (T3): Teams of two humans (R&D + Commercial) working with GenAI assistance.
    • Randomization was stratified within eight clusters defined by four business units (Baby Care, Feminine Care, Grooming, and Oral Care) and two geographies (Europe and the Americas) to ensure balance.
    • The GenAI tool used was built on GPT-4 and accessed via Microsoft Azure (or GPT-4o for later participants).
    • Participants in AI-enabled conditions received a one-hour training session on prompting and interacting with the GenAI tool.
    • A crossover experiment design was implemented for control groups, who repeated the task with AI after training, though primary results rely on the initial between-subject comparisons.
  • Outcome Measures (Focusing on Three Pillars of Teamwork)

    • Performance: Primary measure was Quality (1-10 scale), alongside Novelty and Feasibility (1-10 scales); also measured Time Spent (logged seconds) and Length (total words).
    • Expertise: Measured the Technicality of a solution (1-7 Likert score) and classified participants by functional experience (Core job responsibility vs. Non-core job role).
    • Human Sociality: Measured changes in self-reported emotional states through composite measures of Positive Emotions (enthusiasm, energy, excitement) and Negative Emotions (anxiety, frustration, distress), calculated as the difference between post-task and pre-task responses (standardized).
  • Evaluation Process

    • Solutions were assessed by a panel of 22 expert evaluators (MBA and Engineering students/recent graduates).
    • Evaluators were blind to the experimental conditions and the submitters’ profiles.
    • Each solution received, on average, more than three independent evaluations (1,595 evaluations across 550 solutions).
  • Analysis

    • The causal effect of AI adoption and team configuration was estimated primarily using regression analysis.
    • The analysis estimated the effects of the treatment groups against the baseline (Individual without AI), controlling for fixed effects (business unit, date) and individual characteristics.

Background

Introduction

- Software Product Management

Acknowledgements

https://www.perplexity.ai/search/encontre-uma-pesquisa-academic-lUrI2bymS7CnpgJ8IIrwMw