The course aims to provide the fundamental mathematical tools necessary for advanced data analysis techniques through the study of linear algebra. After an introduction to the concept of a mathematical model and a motivation for using linear algebra through the matrix representation of datasets, the main theoretical concepts are reviewed: vector spaces, subspaces, linear independence, bases, distance, norm, and scalar product.

In a second phase, the course introduces linear models. It addresses the problem of parameter estimation through the study of orthogonal projections and the application of the method of least squares. These tools are applied to linear regression, both simple and multiple, as well as to the problem of curve fitting.
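As a minimal illustration of the least-squares idea (a sketch in Python/NumPy rather than the MATLAB used in the course, with invented data), the fitted values are the orthogonal projection of the observations onto the column space of the design matrix:

```python
import numpy as np

# Simple linear regression by least squares: the fitted values X @ beta
# are the orthogonal projection of y onto the column space of X.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])            # exactly y = 1 + 2x

X = np.column_stack([np.ones_like(x), x])     # design matrix [1, x]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # minimises ||X b - y||^2
print(beta)                                   # ≈ [1, 2]
```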

Next, Principal Component Analysis (PCA), a fundamental technique for analysing high-dimensional datasets, is discussed. In this context, eigenvalue problems and the spectral properties of matrices, which form the theoretical basis for implementing PCA, are explored. The theoretical lectures are supplemented with analyses of case studies and practical exercises using MATLAB software to strengthen the applied understanding of the concepts covered. A final assignment is planned to consolidate the skills that have been acquired.
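The link between PCA and eigenvalue problems can be sketched in a few lines (Python/NumPy; the data are simulated purely for illustration):

```python
import numpy as np

# PCA sketch: the principal components are the eigenvectors of the
# sample covariance matrix, ordered by decreasing eigenvalue.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=200)   # nearly redundant column

Xc = X - X.mean(axis=0)                  # centre the data
C = np.cov(Xc, rowvar=False)             # 3x3 sample covariance
eigvals, eigvecs = np.linalg.eigh(C)     # eigh returns ascending order
order = np.argsort(eigvals)[::-1]        # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Xc @ eigvecs[:, :2]             # project onto first two components
explained = eigvals / eigvals.sum()      # variance share per component
```

Because the third column is almost a copy of the first, the smallest eigenvalue is close to zero: two components capture nearly all the variance.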

The course aims to homogenise doctoral students' knowledge of advanced statistical methodologies of particular relevance to business and economic sciences, with a focus on techniques applicable to big data analysis. 
The course structure is divided into two modules.
 

The first module is devoted to statistical models and algorithms oriented to forecasting and segmentation. The main methodologies addressed include:
 

  • measures of association and association rules, with applications to Market Basket Analysis;
  • linear regression model, covered both in its classical inferential aspects and in diagnostic techniques for detecting possible violations of assumptions;
  • logistic regression model, with a focus on its predictive use on individual behaviour;
  • classification trees, with insights on overfitting and error rate estimation;
  • exploratory factor analysis and principal component analysis;
  • cluster analysis, with hierarchical and nonhierarchical techniques.

All methodologies are illustrated through applied examples in economics and business.

The second module focuses on the use of major statistical software, with particular emphasis on the R language. Attention is given to diagnostic analysis in multiple regression models, including in the presence of data characterised by seasonality. Case studies related to logistic regression and regression trees are also proposed.

During practical exercises, students apply the knowledge they have gained in the theoretical part. A final assignment is planned to consolidate the skills that have been acquired.

The course aims to explore advanced techniques for econometric data analysis, with emphasis on panel models and interactions between variables.

The first part examines functional specifications involving dummy variables, interactions between continuous variables, between dummy variables, and between continuous and categorical variables, as well as quadratic effects of numerical variables. These tools allow more flexible modeling of complex relationships between explanatory and response variables.
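A hypothetical sketch of how such specifications translate into a design matrix (Python/NumPy; the model, data, and coefficient values are all invented for illustration):

```python
import numpy as np

# Specification with a dummy, an interaction, and a quadratic term:
#   y = b0 + b1*x + b2*d + b3*(x*d) + b4*x^2 + error
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                        # continuous regressor
d = rng.integers(0, 2, size=n).astype(float)  # dummy variable
y = 1 + 2*x + 3*d + 4*x*d + 0.5*x**2 + 0.1*rng.normal(size=n)

# Interactions and quadratic effects enter as extra columns.
X = np.column_stack([np.ones(n), x, d, x*d, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta recovers approximately [1, 2, 3, 4, 0.5]
```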

The second part of the course is devoted to panel models, with a distinction between balanced and unbalanced panels. Pooled OLS, fixed-effects, and random-effects models are analysed, discussing their underlying assumptions and selection criteria. Special attention is paid to fixed-effects models, which are widely used in the social and economic sciences to control for unobservable heterogeneity.

The final section introduces dynamic panel models and causal estimation techniques, particularly the Difference-in-Differences (DiD) model. The parallel trend assumption, the implications of adopting multiple treatments at different time points, and the critical issues associated with estimating such models are discussed. The teaching approach integrates theory and applications, with the goal of equipping doctoral students with skills essential for advanced empirical research and the analysis of complex longitudinal data in economic and managerial contexts.
A final assignment is planned to consolidate the skills acquired.

The course aims to introduce doctoral students to the study of causality in economics by providing a theoretical and applied overview of the main methodologies used to identify causal effects in the social sciences. It begins with a consideration of the concept of causation in the history of economic thought and continues with an analysis of practical examples that have had a relevant impact in academic debate and public policy.
The course presents Randomised Controlled Trials (RCTs) as the gold standard for causal identification, illustrating their advantages and limitations, especially in real-world economic settings. Next, the main quasi-experimental techniques are discussed, which allow causal effects to be estimated even in the absence of randomisation.
Among these methodologies, we delve into:

  • matching methods, used to construct comparable control groups;
  • instrumental variables (IV), used to address the problem of endogeneity;
  • the Regression Discontinuity Design (RDD), applicable in contexts with assignment thresholds;
  • the Difference-in-Differences (DiD), a technique widely used to evaluate public policies through panel data;
  • the newer Staggered DiD, suitable for treatments distributed over time;
  • the Synthetic Control Method (SCM), particularly effective for ex-post evaluations in aggregate settings with few treated units.
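The basic two-group, two-period DiD estimator can be illustrated with a toy calculation (Python; all numbers are hypothetical):

```python
# Two-by-two Difference-in-Differences: the effect is the change in the
# treated group's mean outcome minus the change in the control group's,
# which nets out the common trend under the parallel-trend assumption.
y_treat_pre, y_treat_post = 10.0, 15.0   # illustrative group means
y_ctrl_pre,  y_ctrl_post  = 8.0, 10.0

did = (y_treat_post - y_treat_pre) - (y_ctrl_post - y_ctrl_pre)
print(did)  # → 3.0
```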

The teaching approach integrates theory and empirical applications, intending to provide doctoral students with the conceptual and operational tools to construct rigorous causal analyses to support scientific research and evaluation of economic and managerial interventions.
A final assignment is planned to consolidate the skills acquired.

 

The course aims to provide doctoral students with essential knowledge and tools to navigate the processes of scientific publication and the systematic analysis of academic literature through the use of bibliographic databases.

The training is structured into three main sections:

The first part focuses on understanding the strategic role of academic publishing in the advancement of scientific knowledge and the development of an academic career. It explores the main types of publications, the structure of a scientific article, journal selection criteria, and the stages of the peer-review process. Content is enriched by the sharing of doctoral students’ direct experiences, encouraging peer-to-peer learning and exchange.

The second part centres on conducting an effective literature review. It addresses the stages of bibliographic research, from identifying keywords to using major scientific databases available through the University of Parma (including EBSCOhost, Emerald, ESSPER) and other online resources (such as ResearchGate, Academia.edu, ScienceOpen, Iris, and Il Sole 24 Ore). It also introduces key reference management software, particularly Zotero and Mendeley Cite.
A practical session is included to familiarise students with the resources presented and to apply them to individual or shared research topics.

The final part of the course consists of a seminar on contemporary issues in academic publishing, with a focus on Open Science, Open Access, and FAIR Data. The aim is to raise awareness of the ethical, technical, and political dynamics of the modern research publishing ecosystem.

The course aims to provide doctoral students with theoretical knowledge and application tools for conducting systematic literature reviews using an integrated qualitative-quantitative approach. Particular attention will be paid to the design of structured reviews, formulation of research questions, definition of study selection criteria, and how to analyse and synthesise results.
The course will include in-depth study of the use of two open source tools widely adopted in international research: VOSviewer and SciMAT. The first will be employed for the construction and visualization of bibliometric maps, based on co-citations, co-occurrence of terms and collaborations between authors. The second, SciMAT, will be used to perform longitudinal bibliometric analyses, to map the thematic evolution and cognitive structure of a research strand over time.
Through practical exercises and guided analysis of case studies, doctoral students will have the opportunity to apply the methodologies learned to the creation of their own systematic review. This will support the theoretical construction of their doctoral project and the framing of their research contribution in the relevant scientific landscape.

The course aims to guide doctoral students in constructing a coherent and well-structured scientific project from an already identified research topic. The primary focus is on the formulation of the research question, its logical coherence and connection with the objectives of the investigation.
The methodological design, understood as a strategic choice of approaches and tools to address the study problem, is then explored in depth. The course provides a thoughtful overview of the main techniques of inquiry, distinguishing between established methods and alternative approaches.
Among standard techniques, the most widely used quantitative methods, such as the structured questionnaire, experiment and document analysis, are covered. As for non-standard ones, qualitative methodologies such as interviews, focus groups, and ethnographic observation are introduced.
The course is designed to strengthen doctoral students' ability to make informed methodological choices in line with the project objectives and the relevant scientific context. Lectures combine theoretical contributions with applied examples, promoting critical reflection on the entire research process.

This course provides doctoral students with both a practical and theoretical introduction to qualitative research, with a special focus on designing and conducting semi-structured interviews and focus groups. Through a combination of face-to-face lectures, practical exercises and analysis of real cases, participants will acquire the skills necessary to define clear research objectives, draft effective interview guides, consciously manage group dynamics and analyze collected data with methodological rigor. The course aims to foster the integration of qualitative methods into management research by encouraging a critical approach to data collection and interpretation, while ensuring credibility, transferability and scientific consistency. Operational skills in transcribing, coding and interpreting qualitative materials will also be developed, with the goal of promoting the production of concrete deliverables such as interview guides and focus group protocols.

During the meetings, the fundamentals of qualitative research will be addressed, analyzing the epistemological paradigms of reference, quality criteria and the main areas of application of interviews and focus groups. The design of the qualitative interview will be explored, with a focus on goal setting, structuring the semi-structured guide and formulating open-ended questions. Techniques for conducting interviews will be explained, focusing on building rapport with the interviewee, active listening, managing silences and ethical considerations. A relevant part will be devoted to data transcription and coding, through the use of shared standards, open, axial and selective coding techniques, as well as the use of dedicated software. Finally, theoretical and practical aspects related to focus groups will be explored.

This course introduces doctoral students to the field of text mining, a frontier area at the intersection of big data, computational linguistics and artificial intelligence. The goal is to provide the theoretical and practical foundations for analyzing large amounts of text in an automated way, extracting relevant information, and supporting empirical research in economic-social, business, and communication contexts.
The first part of the course covers the fundamentals of Natural Language Processing (NLP), with reference to syntactic and semantic aspects of language and text preprocessing techniques such as tokenization, stopword removal, and normalization.
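A minimal preprocessing sketch (plain Python; the stop-word list is illustrative, not one used in the course):

```python
import re

# Minimal NLP preprocessing pipeline: normalisation, tokenisation,
# and stop-word removal.
STOPWORDS = {"the", "a", "of", "and", "in", "is"}

def preprocess(text):
    text = text.lower()                   # normalisation (lower-casing)
    tokens = re.findall(r"[a-z]+", text)  # crude tokenisation
    return [t for t in tokens if t not in STOPWORDS]

print(preprocess("The analysis of textual data is growing."))
# → ['analysis', 'textual', 'data', 'growing']
```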
This is followed by an exploration of lexical approaches, which involve using predefined dictionaries or constructing custom vocabularies to measure the presence of specific concepts or tones in documents.

One part of the course is devoted to supervised machine learning, in which texts are classified into predefined categories, while another segment explores unsupervised techniques, such as clustering and automatic topic extraction (topic modelling), to discover latent patterns in textual data.
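As an illustrative toy (not one of the course's actual classifiers), a nearest-centroid classifier on bag-of-words vectors shows the supervised idea in a few lines (Python/NumPy; vocabulary, documents, and labels are invented):

```python
import numpy as np

# Toy supervised text classification: represent documents as word-count
# vectors and assign each new document to the nearest class centroid.
vocab = ["price", "market", "goal", "match"]

def vectorise(doc):
    words = doc.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

train = [("price market price", "econ"), ("goal match goal", "sport")]
centroids = {
    label: np.mean([vectorise(d) for d, l in train if l == label], axis=0)
    for label in ("econ", "sport")
}

def classify(doc):
    v = vectorise(doc)
    return min(centroids, key=lambda lab: np.linalg.norm(v - centroids[lab]))

print(classify("the market price rose"))  # → econ
```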
Through examples and exercises, the course enables participants to acquire valuable tools for integrating textual content analysis into their research projects.
A final assignment is planned that will focus on preparing a presentation showing the application of one of the text analysis techniques presented during the course.

The course aims to expand doctoral students' knowledge of primary data collection and analysis using various quantitative research techniques. It is divided into two modules.
Module 1 - STRUCTURED QUESTIONNAIRE
The module focuses on the use of the structured questionnaire as a tool for collecting primary data and the main methods of analysis, with an emphasis on assessing the validity and reliability of the measurement model. During the lectures, articles from the scientific literature are discussed and illustrated, and reference is made to the use of software for data analysis. The theoretical part is complemented by a practical activity in which students develop specific steps in designing a structured questionnaire.
Module 2 - EXPERIMENTAL RESEARCH
The second module focuses on experimental and quasi-experimental methodology, with the aim of understanding the process of constructing a protocol, the types of data collected and basic analysis techniques. Lectures include discussion of published scientific contributions and the use of dedicated software. The theoretical part is complemented by a practical activity in which students develop an experimental design, applying what they have learned.

This course provides an introduction to Structural Equation Modeling (SEM) techniques, with attention to both theoretical aspects and practical implementation in R. SEM integrates factor analysis and regression to model relationships between observed and latent variables, representing an effective tool for complex structural analyses.

  • The course begins with a general overview of the logic and uses of SEM, including specification of models using path diagrams and matrix notation. Basic concepts, such as latent variables, identification, and estimation of systems of equations, are introduced. 
    This is followed by an in-depth look at factor analysis using the R package lavaan, focusing on evaluating model fit, interpreting results, and refining models.
  • The second part is devoted to full SEM analysis, integrating factor analysis and regression. Topics such as mediation, direct, indirect and total effects are covered, as well as best practices in presenting results and common pitfalls to avoid. During hands-on exercises, students will work on real data, learning how to specify, estimate and interpret SEM models in R. Upon completion, they will be able to apply these techniques to their own research questions critically.
    Prior knowledge of SEM is not required, but familiarity with regression (simple and multiple) and the use of R is assumed.

This course introduces two prominent families of econometric models for time series analysis: dynamic models for conditional mean and volatility models for conditional variance. These methods are used to address several econometric issues frequently encountered in empirical applications, such as persistence, serial correlation, and time-varying volatility. Key concepts will be discussed and illustrated through examples on real data using statistical software.

AR, MA and ARMA Models
This unit introduces autoregressive (AR), moving average (MA), and combined ARMA models. It also discusses the fundamental properties of stochastic processes. Special emphasis is given to the properties of stationarity and cointegration and their implications for the estimation process. Practical applications will be illustrated, with a special focus on the economic interpretation of the results.
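A stationary AR(1) process can be simulated in a few lines (Python/NumPy sketch; the parameter values are illustrative):

```python
import numpy as np

# AR(1): y_t = c + phi * y_{t-1} + eps_t, stationary when |phi| < 1.
# The unconditional mean is c / (1 - phi).
rng = np.random.default_rng(42)
c, phi, n = 1.0, 0.5, 50_000

y = np.empty(n)
y[0] = c / (1 - phi)                 # start at the stationary mean
for t in range(1, n):
    y[t] = c + phi * y[t - 1] + rng.normal()

print(round(y.mean(), 2))            # close to c / (1 - phi) = 2.0
```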

ARCH Models
This unit presents autoregressive conditional heteroskedasticity (ARCH) models as a standard approach to capturing time-varying volatility. Applications include modelling financial returns and macroeconomic uncertainty. Estimation procedures and diagnostic analyses will be illustrated through practical applications.

GARCH Models
This section extends the ARCH framework by allowing a dependence between the current conditional variance and its past values. The Generalised ARCH (GARCH) model offers a more flexible framework for capturing persistent volatility. Students will learn how to identify and interpret volatility persistence, reversion to the mean, and implications for risk prediction. Practical applications will be used to estimate GARCH models, with a focus on interpreting volatility in response to economic or financial events.
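The GARCH(1,1) variance recursion can be sketched as follows (Python/NumPy; parameter values are illustrative, and real applications would estimate them by maximum likelihood):

```python
import numpy as np

# GARCH(1,1): sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1}.
# Stationarity requires alpha + beta < 1; the unconditional variance
# is omega / (1 - alpha - beta). High alpha + beta means persistent volatility.
omega, alpha, beta = 0.1, 0.1, 0.85
n = 100_000
rng = np.random.default_rng(7)

sigma2 = np.empty(n)
eps = np.empty(n)
sigma2[0] = omega / (1 - alpha - beta)       # unconditional variance = 2.0
eps[0] = np.sqrt(sigma2[0]) * rng.normal()
for t in range(1, n):
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.normal()
# The simulated conditional variance fluctuates around 2.0 and slowly
# reverts to it after large shocks.
```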
Reading material: Enders, W. 2014. Applied Econometric Time Series. 4th ed. Hoboken, NJ: John Wiley & Sons.
The instructor will provide additional material during the course.

The course aims to introduce doctoral students to the fundamentals of programming in Python and the use of LaTeX for writing scientific papers, with a specific focus on tools useful for data analysis and presentation. 
The activities are structured in an interactive form and adapted to the participants' interests and research needs. They constantly alternate between theoretical explanation and practical application. 
Regarding Python, the course covers learning basic syntax, the primary data structures (such as lists, dictionaries, and tuples), control flow management, input/output, and introductory concepts of object-oriented programming. Also introduced are the NumPy and pandas libraries for manipulating datasets and handling numerical operations and tables, as well as basic graphical features for data representation.
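A small sketch of the kind of pandas operations involved (the dataset is invented):

```python
import pandas as pd

# A tiny dataset as a DataFrame, then a grouped aggregation.
df = pd.DataFrame({
    "firm":    ["A", "A", "B", "B"],
    "year":    [2023, 2024, 2023, 2024],
    "revenue": [10.0, 12.0, 7.0, 9.0],
})

mean_revenue = df.groupby("firm")["revenue"].mean()  # mean per firm
print(mean_revenue["A"])  # → 11.0
```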
The second part of the course focuses on LaTeX, providing an introduction to writing and compiling scientific papers. Practical aspects related to inserting figures and tables, using cross-references for sections, formulas, and numbered elements, and managing bibliographies automatically through standard citation tools are addressed.
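A minimal LaTeX skeleton illustrating figures, labels, and cross-references (bibliography management via BibTeX/biblatex is omitted for brevity; the image name is a placeholder):

```latex
\documentclass{article}
\usepackage{graphicx}   % for including figures

\begin{document}

\section{Results}\label{sec:results}

\begin{figure}[ht]
  \centering
  \includegraphics[width=0.6\linewidth]{example-image} % placeholder file
  \caption{An example figure.}\label{fig:example}
\end{figure}

As shown in Figure~\ref{fig:example} of Section~\ref{sec:results},
cross-references are numbered automatically, as is the equation
\begin{equation}\label{eq:model}
  y = \beta_0 + \beta_1 x + \varepsilon .
\end{equation}

\end{document}
```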

  • The first module focuses on the epistemological foundations of research in economics and management. It analyses the main scientific paradigms: positivism and post-positivism, constructivism and interpretivism, critical realism, and pragmatism. The module explores the relationship between epistemological approaches to research and quantitative and qualitative methodologies, overcoming false epistemological dilemmas.
  • The second module focuses on research ethics, addressing the basic principles of scientific integrity and misconduct. Issues related to publishing ethics, predatory journals, the reproducibility crisis, and open science practices are discussed.
  • The third module is devoted to science dissemination and research communication. It covers effective paper writing and the use of web platforms for scientific visibility. Issues regarding journal classifications and different outlets for publishing scientific research are explored.
During the lectures, real case studies are discussed and examples of best practices and critical issues in conducting and communicating research are illustrated. PhD students are involved in practical exercises.

At the end of the course, doctoral students are required to deliver a short paper (approximately 3000 characters) on their ongoing research, demonstrating epistemological analysis, ethical awareness, and communication skills.

The course examines the primary theories that inform research in the three key areas of management.

  • The Business Administration Module offers an analysis of theories applied to the study of accounting and corporate governance, including Agency Theory, Shareholder Theory, Stakeholder Theory, Institutional Theory, Contingency Theory, Resource Based View, Identity Theory, Social Identity Theory and Learning and Independence Theory in Auditing.
  • The Finance Module covers the main theories related to financial intermediation, monetary policy transmission channels and market efficiency, with a focus on information efficiency and the role of banks in reducing information asymmetries. Theories of behavioural finance are also covered, particularly overconfidence, which is also analysed in its measurement methods. The module also includes business life cycle theory, with a focus on the financial needs of different stages, particularly the startup stage, and the theory of financial structure, ranging from the Modigliani-Miller theorem to the Trade-off, Pecking Order, and Market Timing theories.
  • The Marketing Module focuses on the most widely used theories in the fields of services marketing, retailing and advertising, including Social Exchange Theory, Justice Theory, Attribution Theory, Trust Transfer Theory, SOR Model, Categorisation Theory, Elaboration Likelihood Model, Hierarchy of Effects Model, Signalling Theory and Regulatory Focus Theory. Theoretical analysis will be accompanied by a critical discussion of international scientific articles, with the aim of developing doctoral students' skills in theoretical reflection and applying theories to their own fields of research.

The course aims to explore the structure of general equilibrium theory, with a particular focus on the neoclassical approach. The analysis of the different models is essential to understanding the mechanisms through which economies allocate resources and distribute income over time. The study will be divided into stages of increasing complexity, initially dealing with the exchange model, followed by the production model with non-reproducible primary inputs and then by the model with reproducible inputs (intertemporal and temporary general equilibrium models).
In conclusion, the latter will be compared with the classical general equilibrium model based on accumulation and the allocation of production surplus. A final assignment is planned to consolidate the skills that have been acquired.

The course initially deals with expected utility and risk aversion, analysing Von Neumann-Morgenstern preferences, the risk premium, and the certainty equivalent, as well as the main measures of risk aversion and preferences concerning different types of risk variation. In a second phase, the focus is on applications to relevant economic problems, such as precautionary saving, the standard portfolio model, the CCAPM and self-protection.
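For a concrete instance of the certainty equivalent and risk premium (Python sketch; the lottery and the choice of log utility are illustrative):

```python
import math

# Certainty equivalent for log utility u(w) = ln(w):
#   CE = u^{-1}(E[u(W)]),  risk premium = E[W] - CE.
outcomes = [50.0, 200.0]   # equally likely wealth levels (hypothetical)
probs    = [0.5, 0.5]

eu = sum(p * math.log(w) for p, w in zip(probs, outcomes))
ce = math.exp(eu)                                    # geometric mean = 100
rp = sum(p * w for p, w in zip(probs, outcomes)) - ce  # 125 - 100 = 25
print(round(ce, 6), round(rp, 6))  # → 100.0 25.0
```

The risk-averse agent would accept 100 for certain in place of a lottery worth 125 in expectation; the gap of 25 is the premium for bearing the risk.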
Finally, the course focuses on asset pricing, with the aim of providing doctoral students with the basic theoretical tools to engage in empirical research on financial asset pricing, using the factor investing framework developed by Fama and French.

The course aims to provide PhD students with a basic understanding of the complex interrelationships between sustainability, the economic system, business management, and social dynamics. It also aims to develop critical skills to analyse sustainable business models and understand the role of enterprises and the economic system in the transition towards a circular and socially responsible economy.
The course introduces the theoretical foundations of economic and business sustainability. The main theoretical frameworks are analysed: stakeholder theory and shared value creation, circular and linear economy models, the triple bottom line concept and its contemporary developments. The module explores the evolution of economic thinking towards sustainability, analysing contributions from other disciplines as well (e.g., sociology and psychology). Particular attention is paid to trade-offs and synergies between economic, social and environmental performance. The relationships between business, economy and society in the context of sustainability are also explored. 

During the lectures, international case studies are discussed and examples of corporate best practices in sustainability management are illustrated. PhD students are involved in practical exercises.

At the end of the course, doctoral students must enrich or implement in their research project elements, theories, data, and research questions integrating economic, environmental, and social aspects of sustainability. They must also demonstrate multidisciplinary analysis skills and practical application of theoretical concepts. The final evaluation considers the quality of the research project delivered and its relevance to the topics addressed.

The course aims to supplement the academic training of doctoral students with transversal skills that are essential for building an effective career path, both within the university world and in extra-academic spheres. Being a successful researcher today requires not only solid scientific skills, but also skills in communication, professional positioning and the strategic use of digital tools.
The first two days are dedicated to career development, academic identity building and professional communication. The first day deals with defining long-term goals, managing applications in the academic world (CV, applications, interviews) and strategies for successfully tackling the first years of a career, while maintaining a balance between personal life and work. The second day expands the focus on topics such as personal branding, digital professionalism, networking and public engagement. The use of artificial intelligence and digital tools to enhance one's scientific visibility is also explored. Activities include practical workshops on building a targeted communication strategy and creating personalised digital content.

The third day introduces participants to the world of big data applied to business research. The difference between traditional and data-driven approaches is illustrated, with a focus on text mining and the use of digital listening tools for trend analysis. Participants learn how to employ big data for the discovery of research topics and the development of systematic literature reviews, including through a practical demonstration on how to conduct a text-mining literature review.
