Reprex is a Dutch-American early-stage startup based in the Netherlands that specializes in making data reliable and accountable while delivering trustworthy analytics and AI solutions. Our diverse team works from several locations and countries; our ideal candidates are located in South Holland (The Hague/Rotterdam/Delft/Leiden), Western New York State/greater Toronto, Budapest, Bratislava, or Milano, where we have ongoing projects and team members, and thus more onboarding and team-building possibilities. That said, we are open to candidates from any location. We bridge industry and academia through various advanced statistical and ethical AI verification projects: our R&D partners are leading universities, and we aim to deploy our solutions in business environments.
We welcome candidates from all backgrounds and mother tongues who are proficient in R and have a good working knowledge of the English language.
Reprex team members follow the Contributor Covenant, “pledg[ing] to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.”
Our internal communication platforms are GitHub for software development and Keybase, an open-source alternative to Slack, for messaging. You do not have to be a master of GitHub Actions and other GitHub products, but you must be able to make pull requests and commits and resolve issues on the platform. To apply to any of the positions below, please send an email with a link to your résumé and a brief description of your interest. You can also ask questions on Keybase.
We are looking for intermediate or advanced R users with a passion for open data and open science for the maintenance of our CRAN-released R packages and the development of further packages. We are pursuing a hybrid model, providing the R community with open-source packages, and engaging in paid work that utilizes this software in commercial or academic environments.
Our ideal candidate(s) are
a) at least intermediate-level R programmers with domain-specific knowledge relevant to our packages, or
b) advanced R programmers who are agnostic about the particular package; and, in either case,
c) excited to maintain and develop one or more of our packages.
All of our packages follow the modernization of the R language and are built on rlang and vctrs. They all use the tidyverse as a dependency, which creates a consistent user interface (e.g. dplyr, tidyr, tidyselect).
iotables: an R package for reproducible input-output analysis and economic and environmental impact assessment. The domain-specific knowledge is input-output economics, multiplier analysis, and environmental impact analysis. A working knowledge of the SNA or an interest in macro-finance is a plus. We develop this application within the rOpenGov community and the rOpenSci community. The application has various uses in banking, insurance, the music industry, and policy design.
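To give a flavor of the domain: the core of multiplier analysis is the Leontief inverse. Below is a minimal base R sketch with a made-up two-sector technology matrix; it illustrates the underlying economics, not the iotables API itself.

```r
# Toy two-sector input-output example (illustrative coefficients, not real data).
A <- matrix(c(0.2, 0.3,
              0.1, 0.4),
            nrow = 2, byrow = TRUE)   # technology coefficient matrix

L <- solve(diag(nrow(A)) - A)         # Leontief inverse (I - A)^-1
output_multipliers <- colSums(L)      # total output induced per unit of final demand
```

In practice the coefficient matrix is derived from a national symmetric input-output table rather than typed in by hand.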
retroharmonize: an R package for retrospective survey harmonization and survey recycling. The domain-specific knowledge is an interest in international, multi-language surveys, longitudinal surveys, and the reuse of survey data. We develop this application within the rOpenGov community and the rOpenSci community. The application has various uses in survey harmonization, data integration, and survey design.
regions: an R package for adjusting sub-national boundaries for the making of regional statistics. While the U.S. has relatively stable sub-national boundaries (e.g., the U.S. postal codes), most nations change their internal boundaries very frequently. Currently, regions tracks these changes in Europe, but our package could and should be extended to all ISO-conforming sub-national boundaries globally. An ideal domain-specific interest is geography, cartography, and/or small-area statistics. The package is currently not actively developed, but we expect it to be developed in a small-area statistics context, or for surveys with a regional component.
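Tracking boundary changes usually comes down to maintaining correspondence tables between obsolete and current region codes. A hedged base R sketch of the idea (the codes and table are illustrative, and this is not the regions package interface):

```r
# Hypothetical correspondence table mapping obsolete region codes to current ones.
correspondence <- data.frame(
  code_old = c("FR24", "FR26"),
  code_new = c("FRB0", "FRC1")
)

# Regional statistics keyed to the obsolete boundaries.
stats <- data.frame(code = c("FR24", "FR26"), value = c(10, 20))

# Re-key the statistics to the current boundary definitions.
merged <- merge(stats, correspondence, by.x = "code", by.y = "code_old")
merged$code <- merged$code_new
```

A real workflow also has to handle splits and mergers, where one old region maps to several new ones or vice versa.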
dataobservatory: an R package that facilitates the automated documentation and recording of descriptive and administrative (statistical processing) metadata for datasets. It also helps record information about the computational environment to increase reproducibility, and it helps create well-formatted datasets for the APIs of our data observatories.
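As a rough illustration of the concept, descriptive metadata and a note on the computational environment can travel with a dataset as attributes. This is a hedged sketch in plain R, not the dataobservatory package's actual interface:

```r
# Attach descriptive metadata to a dataset and record part of the
# computational environment (illustrative field names).
dataset <- data.frame(geo = c("NL", "HU"), value = c(1.2, 3.4))

attr(dataset, "title")     <- "Example indicator"
attr(dataset, "unit")      <- "percent"
attr(dataset, "r_version") <- R.version.string  # processing metadata

attributes(dataset)[c("title", "unit")]
```

The point is that the documentation is produced programmatically at the moment the dataset is created, rather than written up separately afterwards.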
We are also contributing to a range of packages relevant to music analysis and open data and open science data access, and we are planning the release of new open-source and non-open-source products.
We are looking for individual(s) who can resolve issues via GitHub. Time commitments are flexible, and compensation is commensurate with experience and skill.
We are looking for a contract-based Shiny developer who can create engaging, user-friendly, multi-language Shiny interfaces to our R products. We are interested in working with candidates with experience in Shiny development and/or deployment skills, in particular the ability to dockerize applications and deploy them in the cloud. Currently we deploy on AWS and Netlify, but we may need to deploy on other cloud platforms.
Our Shiny applications have multiple users:
Music organizations and music researchers connected to our Digital Music Observatory
Sustainable finance, sustainable reporting, and climate mitigation policy experts related to our Green Deal Data Observatory
Antitrust experts, antitrust authorities, and merger analysts associated with our Competition Data Observatory
Various creative-industry stakeholders related to our Cultural and Creative Sectors Industries Data Observatory, mainly in book publishing and film production.
Some of our applications are expected to be able to communicate with various REST APIs, e.g. the Eurostat and Spotify REST APIs.
Our applications must work in several languages; buttons, alternate texts, and descriptions must be parameterized and available for localization. The visual elements must follow simple visual structures and a unified colour palette.
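One common way to parameterize UI labels is a translation lookup that the Shiny UI reads from instead of hard-coded strings. A minimal sketch in plain R, with hypothetical keys and languages (a production app might use a package such as shiny.i18n instead):

```r
# Minimal translation table for UI labels (illustrative keys and languages).
ui_labels <- list(
  en = list(submit = "Submit",    title = "Data Observatory"),
  nl = list(submit = "Versturen", title = "Data-observatorium")
)

# Look up a label, falling back to English if a translation is missing.
translate <- function(key, lang = "en") {
  label <- ui_labels[[lang]][[key]]
  if (is.null(label)) ui_labels[["en"]][[key]] else label
}
```

In a Shiny UI the call would look like `actionButton("go", translate("submit", lang))`, so adding a language means adding one list entry rather than touching the UI code.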
Attribution: the female professional avatar is designed by macrovector_official / Freepik.
Final-year Master's students and PhD students in any computer science, statistics, social sciences, or digital humanities program are welcome, too.