Agenda 2021 - AI@Enterprise
Conference presentations and roundtables are grouped according to the following schema.
The official conference language is English.
9th June
US DAY
SPECIAL AFTERNOON SESSION
The agenda for this session features live presentations by our speakers from the United States. Due to the time zone difference, this will be a special afternoon session, with a significant amount of time reserved for live Q&A.
Welcome and introduction
Evention
Codete
10 common mistakes creating AI products for Enterprises
Successful AI products delight customers and drive meaningful business results. However, many data science initiatives result in lengthy POCs that are hardly used. In this workshop, Jaekob will share his journey building AI products at large enterprises such as Adobe and Oracle, as well as at startups. The lessons learned help identify the areas of investment where AI yields the best outcomes.
#DataScience
#AI
#ProductManagement
Time for Q&A
Trust in Numbers: An Ethical (and Practical) Standard for Algorithms
Who was the real Tara Simmons? A court was forced to decide whether she was a convicted felon (her past) or a respected lawyer (her future). Algorithms that use the past to predict the future are common but can cause great harm. Via the story of Ms. Simmons and others like her, we develop a practical ethical standard (with currently available tools) for evaluating and comparing algorithms. Building on recent work by IBM and the IEEE to define ethics for the use of artificial intelligence, this session seeks to set forth a practical and measurable standard for algorithms that may be used to improve the ethics of an individual system and to compare the relative strengths and weaknesses between them.
#ethics
#AIFairness
#GoodAI
#AITransparency
SailPoint Technologies
Time for Q&A
Big Data, ML & AI Fundamentals for Managers – A pragmatic view beyond the hype!
Big Data is not simply about data management problems that can be solved with technology. Instead, it is about business problems whose solutions are enabled by technology that can support the analysis of large sets of potentially diversified data. For this reason, this talk has two key parts: part one is a business-focused discussion that sets the stage for the technology-focused topics covered in part two. Topics include (1) business drivers for adopting Big Data, (2) business adoption and planning considerations, (3) business intelligence with Big Data techniques, (4) technology building blocks and implementation workflow, and (5) establishing and evaluating business success.
#AI
#MachineLearning
#Bigdata
New York Institute of Technology, Arcitura Education
Southern Alberta Institute of Technology (SAIT)
Time for Q&A
Production-grade ML Pipelines - From Data To Metadata
It is well known that data quality and quantity are crucial for building Machine Learning models, especially when dealing with Deep Learning and Neural Networks.
But besides the data required to build the model itself, there is another, often overlooked type of data required to build a production-grade Machine Learning platform: metadata.
Modern Machine Learning platforms contain a number of different components: distributed training, Jupyter Notebooks, CI/CD, hyperparameter optimization, feature stores, and many more. Most of these components have associated metadata, including versioned datasets, versioned Jupyter Notebooks, training parameters, test/training accuracy of a trained model, versioned features, and statistics from model serving. For the DataOps team managing such production platforms, a common view across all this metadata is critical, since we have to answer questions such as: Which Jupyter Notebook was used to build model XYZ currently running in production? If there is new data for a given dataset, which models (currently serving in production) have to be updated? In this talk, we look at existing implementations, in particular MLMD (ML Metadata), part of the TensorFlow ecosystem.
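The lineage questions listed above can be sketched, independently of any particular metadata store, with a few plain-Python records. All model, notebook, and dataset names below are hypothetical illustrations, not part of any real platform:

```python
# Minimal sketch of the lineage queries an ML metadata store answers.
# Each model version records which notebook and dataset version produced it.
model_lineage = {
    "model-xyz:v3": {"notebook": "train_xyz.ipynb:v12", "dataset": "clicks:v7"},
    "model-abc:v1": {"notebook": "train_abc.ipynb:v2", "dataset": "clicks:v7"},
    "model-def:v5": {"notebook": "train_def.ipynb:v9", "dataset": "orders:v4"},
}

# Models currently serving in production.
in_production = {"model-xyz:v3", "model-def:v5"}

def notebook_for(model_version):
    """Which notebook was used to build this model version?"""
    return model_lineage[model_version]["notebook"]

def models_to_update(dataset_name):
    """Which production models must be retrained when a dataset gets new data?"""
    return sorted(
        m for m, lineage in model_lineage.items()
        if m in in_production and lineage["dataset"].split(":")[0] == dataset_name
    )

print(notebook_for("model-xyz:v3"))   # → train_xyz.ipynb:v12
print(models_to_update("clicks"))     # → ['model-xyz:v3']
```

A real system such as MLMD stores these records as typed artifacts and executions in a database; the queries stay conceptually the same.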
ArangoDB
Time for Q&A
Case studies on application of Machine Learning in metals manufacturing
In metals manufacturing, the operations environment is full of data collected by thousands of sensors at very short intervals. Historically, decision making was based on traditional descriptive analytics techniques. However, with the advent of “Industry 4.0”, Machine Learning is quickly becoming an important decision-making tool. Novelis, through models deployed within melting and rolling operations, has already seen great value from these techniques. This talk will focus on a couple of such models that added tremendous value to Novelis’s remelt and cold rolling operations.
#sensordata
#iiot
#machinelearning
#IndustrialAI
#predictiveanalytics
Novelis
Time for Q&A
Towards Human-AI teaming: Distributed Multi-Agent system and Human Collaboration
The goal of our presentation is to stress the need for Human-in-the-Loop and hybrid-systems thinking in AI-driven systems. We present solutions, with experimental results, for formulating and implementing such systems. During the session you will hear about distributing learning processes to build automated AI systems, find out how to ensemble the resulting models to accelerate next-level training, and learn how to architect communication to facilitate human-AI interactions. We will also focus on designing the operational structure of agents and humans to arrange their communication, and on evaluating the component-level performance of human-AI teaming to optimize both human and AI efficiency.
AI Redefined Inc
AI Redefined Inc
Closing and summary
Evention
Codete
VOD SECTION
Better Understand Your Documents with AI
You have all these documents - docs, emails, PDFs, forms, images… They could give you valuable insight into your business and customers to help you make better decisions - but instead most documents just sit there, untapped, because it’s difficult to read, compare, and understand the relationships between them.
#ArtificialIntelligence
#DocumentAI
#ComputerVision
#NLP

AI-based systems for monitoring of earthquakes
Diverse algorithms have been developed for efficient earthquake signal processing and characterization. These algorithms are becoming increasingly important as seismologists strive to extract as much insight as possible from exponentially increasing volumes of continuous seismic data. Deep neural networks have been shown to be promising tools for this. We have developed a number of deep learning tools for more efficient processing and characterization of earthquake signals. In my presentation, I demonstrate the performance of some of these tools applied to seismic data. AI-based techniques have the potential to improve our monitoring ability and, as a result, our understanding of earthquake processes and hazards.
#earthquakemonitoring
#AI4earth
#earthquaketransformer
#scientificmachinelearning

Stanford University
Foundations of Data Teams
Successful data projects are built on solid foundations. What happens when we’re misled about, or unaware of, what a solid foundation for data teams means? When a data team is missing or understaffed, the entire project is at risk of failure. This talk covers the importance of a solid foundation and what management should do to fix a shaky one. It is about the teams within data teams: data science, data engineering, and operations. It details what each team is, what it does, and the unique skills it requires, and covers what happens when a team is missing and the effect that has on the other teams.
#dataengineering
#management
#datateams
#datascienceoperations

Big Data Institute
Banking with Backbot. An AI-Powered Chatbot using AWS Lex
During the session we will focus on AI-powered banking with Backbot, an AWS Lex chatbot for everyday banking, and on the real-world limitations of using AWS Lex in banking environments. I will present a prototype I created for Backbase to show how we can leverage AWS Lex to let customers perform basic tasks like getting an account balance and making transfers between their own accounts and to their beneficiaries.
The presentation will discuss the real-life implementation limitations and constraints encountered when working with AWS Lex.
#aws
#lex
#chatbot
#banking
#nlp

Backbase
Artificial Intelligence is now!
Words like algorithm, machine learning, and predictive maintenance have entered our daily life. News of new advances in artificial intelligence reaches us at a rapid pace. But how much of this noise corresponds to real technological advances? What are the real examples already in use in our companies?
Giulia Baccarin will describe industrial artificial intelligence by presenting, in a simple but in-depth way, the industrialized cases of artificial intelligence in Italian industry in particular: how to design a strategy towards the predictive factory, how to choose the use cases, which skills to involve, and what effort to anticipate. What impact does artificial intelligence already have on our lives? How far can we push automation? What are the potential risks? What is our space for action? An engaging lesson open to all those who want to contribute to the exciting debate on the use of AI in industry and in the city.
#predictivefactory
#MIPU
#appliedAI

MIPU Predictive Hub
10th June
GENERAL SUMMIT DAY
There will be multiple sessions delivered via an online conference platform.
Opening remarks
Evention
Codete
Plenary session
How to streamline MLOps with Vertex AI
Most probably you already build and use AI in your company, or plan to do so. You may wonder how to make your machine learning delivery more efficient and reliable, so that it continuously brings value to your business. In this session you will learn how to streamline your machine learning operations (MLOps) with Vertex AI, a newly launched managed end-to-end ML platform from Google.
#mlops
#ml
#platform
#training
#deployment
#monitoring
#mlmd
Google Cloud
Short break
Parallel tracks I
The parallel sessions are divided into three categories. Participants can choose from:
BUSINESS Session:
♦ Explainable AI and ♦ Data and Machine Learning for Managers
APPLIED MACHINE LEARNING Session:
♦ Hands-on, ♦ Computer Vision, ♦ NLP and ♦ Deep Learning
DATA Session:
♦ Data Engineering and ♦ MLOps
Case for the AI regulator
The use of AI is now all-pervasive across industries. Adoption has picked up pace, mostly in the last 5 years, and shows no signs of slowing down. This takes us to a world where AI will be involved in decisions related to almost everything in our lives - from whether an applicant is eligible for a mortgage and whether a patient’s scan shows cancer, to which route you should take for your commute and which packet of peanut butter you buy based on the search results.
In this talk we make the case that an independent regulator is needed to create the standards and guidelines for adopting the technology across industries. Expecting industry-specific regulators to do this will lead to inconsistent standards, and may leave most industries at best without properly defined standards, or at worst with no regulatory oversight of how the technology is being used.
#AIEthics
#ResponsibleAI
#DataScience
#FairAI
Publicis Sapient
Publicis Sapient
Text Summarization with Transformer Models – Exploring New Frontiers in NLP
Text summarization is a highly useful tool for extracting key information from text, which helps speed up learning, communication and business processes dramatically. With the use of modern attention-based architectures, automatic production of human-like summaries has become easier to achieve than ever. For some of the inputs, however, the quadratic time complexity of transformer encoders results in a specific token limit, acting as a constraint for processing large text sequences. In her talk, Nataliia will show how to produce high quality summaries for large text inputs, guiding you through some of the smart methods to deal with the token limit. She will compare the performance of different state-of-the-art methods on large text sequences on the example of consumer complaint data and show which algorithms ensure producing the best summaries.
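One common workaround for the token limit mentioned above can be sketched as a chunk-then-resummarize loop. The "summarizer" below is a deliberately toy stand-in for a real transformer model, and word counts stand in for real tokenization; the 50-token limit is a hypothetical figure for illustration:

```python
# Sketch: split the input into chunks under the model's token limit,
# summarize each chunk, then summarize the concatenated partial summaries.

TOKEN_LIMIT = 50  # hypothetical per-call limit of the "model"

def chunk(words, limit):
    """Split a word list into consecutive chunks of at most `limit` words."""
    return [words[i:i + limit] for i in range(0, len(words), limit)]

def toy_summarize(words, ratio=0.2):
    """Placeholder 'summarizer': keep the first fraction of the words.
    A real pipeline would call a pretrained encoder-decoder here."""
    keep = max(1, int(len(words) * ratio))
    return words[:keep]

def summarize_long(text):
    words = text.split()
    if len(words) <= TOKEN_LIMIT:
        return " ".join(toy_summarize(words))
    partials = []
    for part in chunk(words, TOKEN_LIMIT):
        partials.extend(toy_summarize(part))
    # Second pass over the concatenated partial summaries.
    return " ".join(toy_summarize(partials))

long_text = " ".join(f"w{i}" for i in range(200))
summary = summarize_long(long_text)
assert len(summary.split()) < len(long_text.split())
```

Other approaches the talk may cover, such as sparse-attention architectures, avoid chunking entirely by reducing the quadratic cost of attention itself.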
qdive GmbH
Scalable AI deployment on the edge
An overview of different model deployment approaches. First, we look at the process of moving from training an ML/AI model to using the model in production: how do requirements overlap and differ between training and production? Next, we discuss the various ways in which inference from fitted models can be sped up for production, e.g. by reducing the runtime/container size, strong typing, and improving memory management. Finally, we talk about WebAssembly and, after a short introduction, discuss why it is an extremely useful target for model deployment: model deployment in WebAssembly is fast, secure, and portable.
#AI
#ML
#IoT
#WebAssembly
#Deployment
Tilburg University, Scailable
Inspirational applications of computer vision in healthcare and agriculture.
If you are curious to find out how computer vision technology and deep learning models can fully automate chemotherapy response evaluation (ensuring a faster and more accurate evaluation process than one conducted by humans), provide crop-state assessments used to optimize fertilization, and aid the monitoring of pigs’ health and well-being during breeding – please join this presentation. You will get insight into our approach to these challenges and what we have been able to achieve leveraging advanced neural network architectures.
#computervision
#deeplearning
#ai
#healthcare
#agriculture
#animalbreeding
SAS
The network for the next decade: AI Driven. Cloud Enabled. Agile
AI technology is creeping into every industry. In the networking industry, the proliferation of devices, data, and people has made IT infrastructure more complex than ever to manage, with many looking to AI for help. Together AI and ML play an increasingly critical role in taming complexity for growing IT networks.
#AIOps
#MistAI
#Marvis
#UserExperience
#ClientVisibility
Juniper
Data Science Lifecycle & ops – MLOps in Azure
In every journey of implementing advanced analytics in an organization, there comes a moment when POCs turn into projects, and projects into an integral part of business processes. Managing the day-to-day maintenance of these processes is not easy because of the ever-changing nature of Machine Learning. How do you maintain control over the changing versions of models, data sources, and processes? In this session, I will show you how to reflect the full Data Science Lifecycle in Azure and set up a mature MLOps process for it.
#MLOps
Elitmind
Building, deploying and operating Machine Learning Models with Tensorflow on Google Cloud Platform
Google Cloud is one of the most advanced cloud providers when it comes to AI services; Google also created TensorFlow and made it open source. As you are probably guessing, the two work together seamlessly. Join our session if you would like to find out how easy it is for a data scientist to build professional, scalable, and reliable machine learning pipelines - without needing to worry about the underlying infrastructure - using Google Cloud and the TensorFlow framework.
Chmura Krajowa
On Pushing the Frontiers: Deep Learning in Space
We have been witnessing the unprecedented success of deep learning in practically all areas of science and industry. How can deep learning-powered algorithms help extract value from satellite data of different modalities, ranging from multi- and hyperspectral imagery to telemetry data? How do we deal with limited (or non-existent) ground-truth data, the high dimensionality of hyperspectral images captured on board an imaging satellite, and hardware-constrained execution environments? How do we verify deep learning algorithms for satellite image analysis? Is deep learning-powered image analysis robust against noise?
#onboardprocessing
#deeplearning
#hyperspectralimageanalysis
#Earthobservation
#satelliteimageanalysis
Silesian University of Technology
Finding duplicate images made easy in python with imagededup
Many online businesses rely on image galleries to deliver a good customer experience and, consequently, attract more traffic. The presence of duplicates in such galleries can degrade the customer experience, and can also lead to the wrong evaluation of image-based machine learning models. I will show the components of an image deduplication system that offers several algorithms to choose from out of the box, including hashing and a deep learning-based feature generation approach, and give a practical demo.
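The hashing flavour of deduplication described above can be sketched library-free, in the spirit of tools like imagededup. Here tiny grayscale matrices stand in for decoded, downscaled images; the filenames and pixel values are made up for illustration:

```python
# Average-hash deduplication: hash each image into a bit pattern,
# then group images whose hashes are within a small Hamming distance.

def average_hash(image):
    """One bit per pixel: 1 if the pixel is above the image's mean brightness."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

def find_duplicates(images, max_distance=2):
    """Map each image name to the names of its near-duplicate images."""
    hashes = {name: average_hash(img) for name, img in images.items()}
    return {
        name: [other for other in hashes
               if other != name and hamming(hashes[name], hashes[other]) <= max_distance]
        for name in hashes
    }

gallery = {
    "a.png": [[10, 200], [10, 200]],
    "a_copy.png": [[12, 198], [11, 199]],   # near-identical to a.png
    "b.png": [[200, 10], [200, 10]],        # inverted, very different
}
dups = find_duplicates(gallery)
print(dups["a.png"])  # → ['a_copy.png']
```

A real system would first decode and downscale actual image files, and deep-learning variants replace the hash with learned feature vectors compared by cosine similarity.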
#imageduplicates
#opensource
#computervision
#python
#AI
Axel Springer AG
Short break
BUSINESS Session:
♦ Explainable AI and ♦ Data and Machine Learning for Managers
APPLIED MACHINE LEARNING Session:
♦ Hands-on, ♦ Computer Vision, ♦ NLP and ♦ Deep Learning
DATA Session:
♦ Data Engineering and ♦ MLOps
Content readiness of clients and the implications for your project
In this talk, I will discuss three case studies of chatbot projects with regard to the following questions: What is content readiness? How can the client's implicit knowledge be harnessed? How do you approach projects where the content is unknown to the client? What skills are needed in your team when you are in charge of the content?
#chatbot
#conversationalai
#rasa
#contentanddesign
#contentreadiness
#projectmanagement
Springbok AI
Enabling Machine Learning Algorithms for Credit Scoring - Explainable Artificial Intelligence (XAI) methods for a clear understanding of complex predictive models.
The rapid development of advanced modelling techniques gives us the opportunity to build increasingly accurate tools. However, as usual, everything comes at a price, and in this case the price is losing interpretability of a model while gaining accuracy and precision. For managers who must control and effectively manage credit risk, and for regulators who must be convinced of model quality, that price is too high, which prevents them from using advanced models. In this talk, we show how to take credit scoring analytics to the next level: we compare various predictive models (logistic regression, logistic regression with weight-of-evidence transformations, and modern artificial intelligence algorithms) and show that advanced tree-based models give the best results in predicting client default. More importantly, we also show how to augment advanced models with techniques that make them interpretable and more accessible to credit risk practitioners, resolving the crucial obstacle to widespread deployment of more complex, “black box” models such as random forests and (extreme) gradient boosted trees. All this will be shown on a large dataset obtained from the Polish Credit Bureau, to which all banks and most lending companies in the country report their credit files. This breadth of data ensures high quality of the model inputs and objectivity of the conclusions.
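The weight-of-evidence (WoE) transformation mentioned above is a classic credit-scoring encoding: for each category of a feature, WoE = ln(share of non-defaulters in the category / share of defaulters in it). A minimal sketch on made-up portfolio data (the categories and default flags below are purely illustrative):

```python
import math

def weight_of_evidence(categories, defaults):
    """categories: list of category labels; defaults: parallel 0/1 list
    (1 = client defaulted). Returns {category: WoE}."""
    goods = sum(1 for d in defaults if d == 0)
    bads = sum(1 for d in defaults if d == 1)
    woe = {}
    for cat in set(categories):
        cat_goods = sum(1 for c, d in zip(categories, defaults) if c == cat and d == 0)
        cat_bads = sum(1 for c, d in zip(categories, defaults) if c == cat and d == 1)
        # A production implementation would smooth zero counts; none occur here.
        woe[cat] = math.log((cat_goods / goods) / (cat_bads / bads))
    return woe

# Toy portfolio: renters default more often than owners in this made-up data.
cats = ["owner"] * 6 + ["renter"] * 4
dfl = [0, 0, 0, 0, 0, 1] + [0, 1, 1, 1]
woe = weight_of_evidence(cats, dfl)
print(woe)  # owner > 0 (safer than average), renter < 0 (riskier)
```

Replacing raw categories with their WoE values gives the logistic regression a monotone, risk-ordered input, which is one reason the encoding remains popular with regulators.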
#XAI
#CreditScoring
Data Juice Lab, University of Warsaw
Data Juice Lab
Two-Layer Approach to Combine Artificial and Human Intelligence when Labeled Data is Scarce
Building an AI solution when the data is unlabeled and labeling the full data set is too expensive is a more than complex task. To overcome this challenge, GfK uses a two-layer approach similar to active learning. In the first step, we build a model to propose a relatively small subset of the data to be annotated by the market experts who will work with the solution. Then, to further reduce the needed involvement, we build a second model on the annotations to minimize expert involvement in the future. The presentation will showcase how the two-layer approach helped GfK increase data quality while minimizing the needed human labeling effort. Furthermore, we will discuss the challenges and benefits of this approach. Finally, there will be a deep dive into the code, the architecture, and the continuous evolution pipeline for the model.
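The first layer described above resembles classic uncertainty sampling: let a model pick the small subset of unlabeled items worth sending to the experts. A hedged sketch, where the "model" is a stub returning made-up probabilities (the item ids and scores are hypothetical):

```python
def predict_proba(item):
    """Stand-in for a trained classifier's probability of the positive class."""
    return item["score"]  # hypothetical pre-computed score in [0, 1]

def select_for_annotation(items, budget):
    """Pick the `budget` items the model is least certain about
    (probability closest to 0.5), i.e. classic uncertainty sampling."""
    return sorted(items, key=lambda it: abs(predict_proba(it) - 0.5))[:budget]

pool = [
    {"id": "A", "score": 0.97},  # confidently positive: no need to label
    {"id": "B", "score": 0.52},  # uncertain: worth an expert's time
    {"id": "C", "score": 0.48},  # uncertain: worth an expert's time
    {"id": "D", "score": 0.03},  # confidently negative
]
chosen = select_for_annotation(pool, budget=2)
print(sorted(it["id"] for it in chosen))  # → ['B', 'C']
```

The second layer would then train on the collected expert annotations, shrinking the pool of items that ever need a human look.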
#AI
#ML
#TwoLayerApproach
#CombineArtificialandHumanIntelligence
GfK
Topological Driven Methods for Complex Systems
We continuously produce, at an unpredictable rate, a large amount of heterogeneous, noisy, sparse, unstructured data. Traditional ML techniques for data mining and knowledge discovery are unsuitable for extracting valuable insights and uncovering the global data structure from local observations. We need a different kind of technique - with minimal assumptions, coordinate-free, deformation-invariant - that provides a compressed semantic representation of the original data space. Topological methods hold great promise in solving some of the most intractable challenges of complex systems. Applications range from large-scale exploratory data analysis and data mining, to inference and prediction on massive datasets, to early-warning-signal detection. Topological methods such as persistent homology and Mapper provide a succinct description of the dynamics of multilevel, multi-scale, non-linear complex systems using techniques derived from algebraic topology. They have already shown efficient usage in several diverse fields such as healthcare, computational biology, control theory, industrial fault analysis, and information security, among many others. In this talk, I will present topological data analysis methods and discuss illustrative examples taken from data networks. Presentation plan: Swimming in Sensors & Drowning in Data; Data has Shape and Shape has Meaning; TDA illustrative examples; What's next?
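As a taste of one building block named above, 0-dimensional persistent homology can be sketched with a union-find over an edge filtration: every point is born at scale 0, and a connected component "dies" at the scale of the edge that merges it into another. The points and edge scales below are invented for illustration:

```python
# 0-dimensional persistence via union-find over an edge filtration.

def persistence_0d(n_points, edges):
    """edges: list of (scale, u, v), processed in increasing scale.
    Returns the death scale of each component that dies (the infinite
    bar of the last surviving component is omitted for brevity)."""
    parent = list(range(n_points))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    deaths = []
    for scale, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            deaths.append(scale)  # one component dies at this scale
    return deaths

# Two well-separated clusters {0,1} and {2,3}, bridged only at scale 5.0.
edges = [(0.2, 0, 1), (0.3, 2, 3), (5.0, 1, 2), (5.1, 0, 3)]
print(persistence_0d(4, edges))  # → [0.2, 0.3, 5.0]
```

The long gap between the deaths at 0.3 and 5.0 is exactly the "persistent" signal: two clusters that survive across a wide range of scales.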
#TopologicalDataAnalysis
#PersistentHomology
#TopologicalInference
#Filtration
#Mapper
Cisco Systems France
From ML Research to Production - the Autobahn Way!
Germany has been at the forefront of Machine Learning (ML) research in Europe over the past decade, whether measured by published papers or issued patents. As a consequence, you would theoretically expect a lot of ML in our products across all industries, and a certain maturity of the field in Germany. The reality, however, is different. ML is “Neuland” for many companies here, and we are not good at taking state-of-the-art research results into production. In this talk, I will present why this is the case and then show some recipes for solving the problem.
Dat Tran Ventures
Implementing a ML data product for lead management with focus of the insurance industry
Connecting data to context means having the right data, using the right algorithms, and delivering the right insights in a way that makes them an integral part of insurers' and customers' daily lives. The presentation shows how data products can support lead management in the insurance industry. To cover the full journey of a data product, the use case, the data and model evaluation, and, most importantly, the visualization and implementation will be shown using a real example.
#predictiveanalytics
#conversionmodeling
#leadprioritisation
#dataproductimplementation
#visualization
Syncier Analytics
Implementing AI successfully and using it to add value to companies
Artificial intelligence (AI) is the next stage of the industrial revolution and aims to reproduce human decision-making behavior in software and hardware. In recent years, many companies have begun to invest in AI. However, most AI projects hardly make it beyond a proof of concept and thus do not contribute to increasing competitiveness. How can AI potential be identified in companies? Which managerial and technical challenges can arise when implementing AI applications? We cover best practices for AI implementation and how these lead to productive applications that ultimately create financial value.
#AIinnovation
#InnovationmanagementforAI
#MakeAIworkforyourbusiness
skyrocket.ai GmbH
Can AI fix the global food waste problem?
The problem: How food waste negatively affects our climate
The solution: “Tech for Good” - Using an AI driven distribution platform to match time-critical perishable food oversupply with the demand
The approach: Meeting and understanding the needs of supply and demand partners
The impact: Making use of existing food resources through a circular economy approach, reducing unnecessary CO2 emissions (environmental impact), and supporting people in need and NGOs (social impact)
#techforgood
#reducefoodwaste
#climateaction
#circulareconomy
#SDGs
SPRK.global GmbH
Scaling the data function in an SMB, DATA ENGINEERING
How to scale data functions from the organic to the hyper-growth phase. An in-depth view of the modern data-function stack for startups. Scaling data teams. Adopting modern data technologies. Building cost-efficient data solutions.
#data
#engineering
#dataengineering
#modernstack
#scaleteams
Growth
Short break
Roundtable discussion session
Parallel roundtable discussions are the part of the conference that engages all participants. They serve several purposes. First of all, participants have the opportunity to exchange opinions and experiences about a specific issue that is important to that group. Secondly, participants can meet and talk with the leader/host of the roundtable discussion - selected professionals with vast knowledge and experience.
1. AI is for everyone
With the advancement of tooling in the ML space, AI is no longer the domain of research labs alone. Let's talk about how AI is utilized in your company today and what the best practices are for leveraging the value of AI across all business areas.
Google Cloud
Google Cloud
2. Implementing AI in Enterprise - questions to resolve
There are many interesting AI solutions - but how do you choose the right one for your organization? Where may AI deliver the most substantial gain, and how do you locate these places in your business? How do you implement it - is it advisable to build in-house AI knowledge, or better to find a reliable partner?
Ringier Axel Springer Polska
3. Methods of detecting undesirable events based on machine learning.
Łukasiewicz Research Network - Institute of Innovative Technologies EMAG, as a research institute and collaborator in research and development projects, is involved in numerous and diverse challenges related to artificial intelligence and machine learning. During the round table, I would like to take part in a discussion on identifying undesirable events with the use of machine learning and knowledge discovery approaches. I can share our experience from two recent case studies. The first one is in cybersecurity: ML methods are used to identify suspicious behaviour in monitored network traffic. The second use case is predictive maintenance: ML methods are used to identify machine failure.
Łukasiewicz Research Network - Institute of Innovative Technologies EMAG
4. Can federated learning save the world?
The usage of AI models is increasing worldwide. However, traditional AI setups with central training in a cloud or on a server have several disadvantages: loss of data privacy, high latency, complex infrastructure, and high energy consumption in data centers due to cooling. Federated learning brings AI training to the data and resolves these problems by design. We will answer questions such as: what is federated learning, can it be used for your project, and how can it be implemented in your existing projects?
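The core aggregation step behind "bringing AI training to the data" can be sketched as federated averaging: each client trains locally and only model weights - never raw data - are sent to be combined, weighted by how many samples each client holds. The clients and weight vectors below are hypothetical:

```python
# Minimal sketch of the federated averaging (FedAvg-style) aggregation step.

def federated_average(client_updates):
    """client_updates: list of (n_samples, weights) pairs, where weights
    is a flat list of model parameters. Returns the aggregated weights."""
    total = sum(n for n, _ in client_updates)
    dim = len(client_updates[0][1])
    aggregated = [0.0] * dim
    for n_samples, weights in client_updates:
        for i, w in enumerate(weights):
            aggregated[i] += w * (n_samples / total)
    return aggregated

# Two hypothetical clients; the one with more data pulls the average toward it.
updates = [
    (100, [1.0, 2.0]),  # client A: 100 local samples
    (300, [3.0, 6.0]),  # client B: 300 local samples
]
print(federated_average(updates))  # → [2.5, 5.0]
```

In a real deployment this loop runs on a coordinating server over many communication rounds, with clients training locally between rounds.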
Adap GmbH
5. What is the craze about MLOps?
Why is MLOps popping up now? What is the role of MLOps? Which skill set is needed to be a good MLOps candidate? Is MLOps part of the development team or a separate team?
GfK
Lunchtime break
Tech battle
During the battle, we fight to show the best sides of a given technology and the drawbacks of the opponent's solution. This time we take Keras, which will be defended by Karol, and PyTorch, which will be represented by Piotr. We cover maturity, community, usage examples, prototyping, production deployment, distributed training, and available extensions. The audience is allowed to join the battle!
Codete
Codete
Host:
Codete
Parallel tracks II
The parallel sessions are divided into three categories. Participants can choose from:
BUSINESS Session:
♦ Explainable AI and ♦ Data and Machine Learning for Managers
APPLIED MACHINE LEARNING Session:
♦ Hands-on, ♦ Computer Vision, ♦ NLP and ♦ Deep Learning
DATA Session:
♦ Data Engineering and ♦ MLOps
Harnessing the virtual realm for successful real-world artificial intelligence.
Artificial Intelligence is impacting all areas of society, from healthcare and transportation to smart cities and energy. Hear how NVIDIA invests both in internal pure research and in accelerated computation to enable its diverse customer base across gaming & extended reality, graphics, AI, robotics, simulation, high-performance scientific computing, healthcare, and more. You will be introduced to the GPU computing platform, shown successfully deployed real-world applications, and given a glimpse into the current state of the art across academia, enterprise, and startups.
#AI
#robotics
#simulation
#science
#technology
NVIDIA
User Segmentation: Conversion Likelihood Model
A classifieds website is an online marketplace where people can sell and buy new and used items from a wide selection of categories. The vast majority of user actions on these platforms take place between buyers and sellers without any real payment transaction. Therefore, at eBay Classifieds Group (eCG), we had to come up with an operational definition of conversion: if a buyer sends a message to a seller, or if a seller posts a new listing on the platform, we refer to them both as "converted". eCG is an umbrella company managing 14 different classifieds platforms from all across the world. Conversion numbers for the group are tracked by a central team named 'Global Growth', which is tasked with increasing the number of active (recently converted) users on the platforms. They turned to the Data Science Team with a question: "Who will have a conversion soon?" The initial purpose of this model was to let them know in advance who is likely to convert soon, so that the marketing team could act on users beforehand. The idea is straightforward: focus only on users who are unlikely to convert soon and try to find different ways to convince them to get back on our platform (and avoid allocating resources to users who will convert soon anyway, without requiring any extra targeting). This personalised marketing strategy was the main objective of the project. This talk presents an ML model developed to accomplish this objective by predicting a user's likelihood of conversion in the near future based on their past actions on a platform.
#MachineLearning
#BigData
#PredictiveModeling
#ConversionRateOptimization
#PersonalisedMarketing
eBay Classifieds Group (eCG)
The Data Mesh
There have been several evolutions in data handling. First we saw the data warehouse, which later led to the second evolution - the data lake. Now we see the next trend, the "Data Mesh". What is the data mesh, and how does it line up with its predecessors? What is different, and why is it not a technology? Get the answers in this session.
UNIQA Insurance Group AG
Why Deep Learning cannot match the human brain
Deep learning requires exponentially more resources to increase its intelligence; one can say that the intelligence of deep learning does not scale well. In contrast, the human brain does a much better job with a much more scalable approach. Biological intelligence scales far better: a huge increase in intelligence for a small increase in required resources. My own calculations show that a deep learning model matching a human would need resources the size of our galaxy, and its training time would be several times the age of the universe. That is how big the discrepancy is between the biological approach to intelligence and today's AI technology.
evocenta GmbH
AI methods for predictive maintenance in production processes
The WiTraPres project (conducted by the data science lab at FH Südwestfalen, Germany) aims at adding a predictive maintenance module to a fully automated warm reshaping plant (which produces parts for vehicle interiors) using machine learning methods. More precisely, insights gained from data are combined with expert knowledge in order to achieve predictions that are as accurate as possible. In the data science component, new predictive maintenance methods are developed and applied using process data, and are then compared with conventional methods. One approach implements Bayesian LSTM autoencoders for anomaly detection: Bayesian methods are coupled with time-resolved deep learning methods to achieve the highest possible predictive power with high precision. The predictions made by this model can be effectively combined with expert knowledge. Our approach shows promising results for this specific case and is able to compete with state-of-the-art predictive maintenance methods.
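A sketch of the decision rule such a Bayesian detector might use: run several stochastic forward passes (e.g. Monte Carlo dropout), then flag an anomaly only when the reconstruction error is high even after accounting for model uncertainty. The threshold rule here is an illustrative assumption, not the project's actual model.

```python
import statistics

def flag_anomaly(mc_errors, threshold):
    """mc_errors: reconstruction errors from repeated stochastic forward
    passes of the autoencoder. Flag only if even the lower uncertainty
    bound of the mean error exceeds the calibrated threshold."""
    mean_err = statistics.mean(mc_errors)
    std_err = statistics.stdev(mc_errors)
    return (mean_err - std_err) > threshold
```

Subtracting one standard deviation before comparing makes the rule conservative: uncertain predictions do not trigger maintenance alarms.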
#machinelearning
#deeplearning
#predictivemaintenance
#automotivemanufacturing
University of South Westphalia
Building End-to-End Machine Learning Workflows with Kubernetes, Kubeflow Pipelines, and BERT
Kubeflow is a popular open-source machine learning (ML) toolkit for Kubernetes users who want to build custom ML pipelines. Kubeflow Pipelines is an add-on to Kubeflow that lets you build and deploy portable and scalable end-to-end ML workflows. In this session, we show you how to get started with Kubeflow Pipelines on AWS. We also demonstrate how you can integrate powerful Amazon SageMaker features such as data labeling, large-scale hyperparameter tuning, distributed training jobs, and secure and scalable model deployment using Amazon SageMaker Components for Kubeflow Pipelines.
#kubeflow
#pipelines
#bert
#nlp
#sagemaker
Amazon Web Services (AWS)
Mitigating Privacy Risks in Machine Learning through Differential Privacy
With the growing amount of data being collected about individuals, ever more complex machine learning models can be trained based on those individuals’ characteristics and behaviors. Methods for extracting private information from the trained models become more and more sophisticated, such that individual privacy is threatened. In this talk, I will introduce Differential Privacy as a powerful method for training neural networks with privacy guarantees. I will also show how to apply the method effectively in order to achieve a good trade-off between utility and privacy.
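The core recipe behind such privacy guarantees is often DP-SGD: clip each per-example gradient, average, then add calibrated Gaussian noise. The sketch below is a simplified illustration of that step only (no privacy accounting); all parameter names are ours, not from the talk.

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One differentially private gradient step: clip each per-example
    gradient to clip_norm, average, then add Gaussian noise scaled by
    noise_multiplier. A minimal sketch of the DP-SGD idea."""
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])
    n = len(per_example_grads)
    avg = [sum(col) / n for col in zip(*clipped)]
    sigma = noise_multiplier * clip_norm / n
    return [x + rng.gauss(0.0, sigma) for x in avg]
```

The clip norm and noise multiplier are exactly the knobs that trade utility against privacy, which is the balance the talk discusses.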
Fraunhofer AISEC
Stock Price Prediction and Portfolio Optimization Using Recurrent Neural Networks and Autoencoders
Deep learning approaches have proven powerful in modelling the volatility of financial stocks and other assets, as they are able to capture non-linearities in sequential data. I will present an analysis that helps asset managers select, forecast and analyse different optimal asset portfolios over multiple backtest iterations. The analysis includes the preselection of stocks using an autoencoder model, a new way to clean the sample covariance matrix that describes the risk of a portfolio, a ten-day forecast using recurrent neural networks, and the final portfolio optimization.
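As a flavour of the final optimization step: in the two-asset case the minimum-variance portfolio has a closed form. This is for illustration only; the talk's optimizer works over many assets with a cleaned covariance matrix.

```python
def min_variance_weights(cov):
    """Closed-form minimum-variance weights for a 2-asset portfolio.
    cov is the 2x2 covariance matrix [[s1^2, s12], [s12, s2^2]]."""
    (s1_sq, s12), (_, s2_sq) = cov
    w1 = (s2_sq - s12) / (s1_sq + s2_sq - 2 * s12)
    return [w1, 1.0 - w1]
```

Cleaning the sample covariance matrix matters precisely because these weights are highly sensitive to estimation noise in the off-diagonal terms.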
#DeepLearning
#QuantitativeFinance
#Autoencoders
#PortfolioOptimization
BIVAL GmbH
Measuring success of Data Teams
We invest a lot of effort in building data teams, scalable architectures, data engineering, BI, analytics and machine learning. But are we doing the right things, and are we focusing on the right ones? How much are we actually contributing to the organisation? In this session I will share my experience of how to measure success in data teams, and the benefits of doing so.
Taxfix
Short break
DATA Session: ♦ Data Engineering and ♦ MLOps
From gut feeling to algorithm: Leveraging AI to transform the product distribution in the insurance industry
The insurance industry with its established players is changing, and young InsurTech companies like Lemonade are entering the market. How can the established insurers keep up and stay technically on the ball? Numerous insurance companies are already working on setting up their own platforms. However, this requires a lot of know-how and is time-consuming and cost-intensive. The use of special AI-based solutions is much more efficient. Appropriate tools can significantly optimise the sales and distribution of products and services in the insurance industry. By using artificial intelligence, not only can documents be automatically checked and processes accelerated, but real time intelligent suggestions can also enable employees to provide even better advice – and thus increase customer satisfaction and contract closings. The lecture will show which opportunities are offered to insurance companies by AI-based solutions, which limits the technologies currently have, and how they can be implemented as efficiently as possible in insurance companies.
Zelros
AI-based chatbots: opportunities, possibilities, limitations
During the Covid-19 pandemic, the volume of requests via digital channels has increased massively. One solution to ensure excellent service without overburdening staff: AI-based chatbots. Intelligent bots can handle standard inquiries quickly and automatically. However, this does not always work properly: almost 92 percent of interactions with bots in Germany contain swear words and insults.
Jens Leucke is therefore happy to explain:
• for which areas of application AI-based chatbots are currently being considered and for which they are not
• which technological aspects and functions are particularly relevant for chatbots
• which strengths and weaknesses AI currently still has in this area - and how the technologies can be further improved
• which steps need to be taken when implementing AI-based solutions in customer service in order to significantly improve the user experience
freshworks
Personal bandit or: how to give users what they want (on a budget)
Everybody talks about artificial intelligence and machine learning these days, but how many of the solutions mentioned genuinely involve a machine that is learning? There are plenty of published results about, e.g., DeepMind beating Starcraft - and most Atari games were mastered by RL algos a while ago - but what about actual business applications? In this talk, I will give an overview of how you can apply associative reinforcement learning to a real problem: online content personalisation (and stay within budget, thanks to open source).
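The core loop of such a personaliser can be sketched as a bandit. Shown here context-free and epsilon-greedy for brevity; an associative (contextual) variant, as in the talk, would condition the value estimates on user features.

```python
import random

class EpsilonGreedyBandit:
    """Minimal epsilon-greedy bandit: explore with probability epsilon,
    otherwise serve the arm (content variant) with the best running
    average reward."""

    def __init__(self, n_arms, epsilon=0.1, rng=None):
        self.rng = rng or random.Random()
        self.epsilon = epsilon
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms

    def select(self):
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # incremental mean keeps memory use constant per arm
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

The constant-memory incremental update is what keeps this "on a budget": no training cluster, just a handful of counters per content variant.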
eBay Classifieds Group
Knowledge Graph Based Entity Similarity Learning
For about the last decade, Knowledge Graphs have been sneaking into our daily lives, be it through voice assistants (such as Alexa, Siri or Google Assistant), intuitive search results, or personalized shopping experiences through online store recommenders - we interact with them constantly. In this presentation, we will show how we have used Knowledge Graphs to find the similarity between various entities.
#Knowledgegraphs
#Ontology
#Machinelearning
#ArtificialIntelligence
#NaturalLanguageProcessing
Delivery Hero SE
Microsoft
Application of time series forecasting and optimization tools for cash supply chain management
How do you optimize cash deliveries to bank branches? The solution consists of two main elements: a predictor, which forecasts future cash levels, and an optimizer, which creates an optimal cash delivery plan based on the predicted levels and specific business requirements. The presentation starts with a high-level description of our AI development and operating model and the tools we use, followed by the specific requirements of the cash supply chain. We then cover our approach to cash level forecasting and various ways of solving the optimization problem, which finally led to a meta-model: a solution that chooses the best optimization parameters based on business metrics, avoiding the pitfalls that come from applying "common sense" and "basic logic" to the optimization problem.
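The predictor/optimizer split might look like this in miniature. The moving-average forecast and threshold rule are illustrative stand-ins for the talk's actual models, and all names are ours.

```python
def plan_delivery(cash_history, safety_level, delivery_size):
    """Predictor: naive 7-day moving average of branch cash levels.
    Optimizer: schedule a delivery when the forecast falls below the
    safety level. Both rules are simplified placeholders."""
    window = cash_history[-7:]
    forecast = sum(window) / len(window)
    deliver = forecast < safety_level
    return forecast, (delivery_size if deliver else 0)
```

A meta-model, as described above, would sit one level higher: choosing parameters like the safety level and delivery size against a business metric rather than fixing them by hand.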
BNP Paribas
BNP Paribas
Aerial Remote Maintenance
Creating the future of remote maintenance is strongly connected with aerial applications of drones. In the industries of energy, construction and facility management, drones are a fully established tool for various tasks. FlyNex enables companies to automatically generate and collect data. By managing the data within the FlyNex Platform companies can analyze them with a chosen (AI-based) software, which can be fully integrated into the platform.
#aerialremotemaintenance
#drones
#AI
#FlyNex
FlyNex GmbH
From zero to hero – how to build open source platform for enterprise users
Having open source tools in the company is not optional these days - it is a necessity. Within a few months, we went from collecting requirements, through a sandbox, to a platform on which ML and AI solutions are built and deployed. During the presentation, we would like to tell you how to plan such a process optimally, what to pay special attention to, and how to combine the work of a broadly understood business with the support of IT departments. Starting to build an ML and AI platform from a blank page can be confusing; it is worth dividing the entire process into appropriate steps in order to build and deploy analytical products efficiently at the end. During the presentation, we will cover: (1) how to collect the requirements, (2) how to divide the implementation work, and (3) what is crucial for smoothly overcoming the successive stages. The platform was built entirely on Kubeflow, complemented by applications such as MLflow, Jenkins and others. Not everything appeared at once, and this incremental approach turned out to be crucial for the smooth transition between the successive stages of the platform's development. This is one of the most important lessons learned from the entire process of building and implementing the platform, and one we would like to share with you.
#kubeflow
#MLOps
#deployment
#successstory
#lessonslearned
Bank Millennium
Bank Millennium
Special meeting
Panel discussion - Does AI need state support?
An increasing amount of funding and effort is put into AI worldwide, and a large part of it comes from public sources. Why is this, and where and how can it be made meaningful - in development, proliferation and deployment? To what extent should governments get involved - to make things go faster without spoiling the natural free-market forces? Is there a "competition" between particular countries in the world, and is it good for all of us? What seems most effective and needed now?
The panel will be joined by:
Digital Poland
Jagiellonian University, KnowAI.eu
University College London
Israel Innovation Authority
ValueWorks GmbH
11th June
WORKSHOPS DAY
In each round there will be a slot for a complete 4-hour workshop of your choice. A detailed description is available here.
Round I
AI FOR MANAGERS
Many data science or machine learning projects fail due to mistakes made during project development. These mistakes fall into a few common groups that, if known in advance, can make your project successful. The training also gives technical and non-technical managers a better understanding of the topic of artificial intelligence. We go through the process of AI transformation and share a few tips on how to make the transformation easy.
Codete
MACHINE LEARNING SECURITY
In the days of autonomous cars, drones and automated medical diagnostics, we want to learn more about how to interpret the decisions made by machine learning models. With such information we are able to debug models and retrain them in the most efficient way. This training is dedicated to managers, developers and data scientists who want to learn how to interpret the decisions made by machine learning models.
Codete
Extra workshop
Extra workshop
LOW CODE AI ON GOOGLE CLOUD PLATFORM
Advanced AI models need experienced data scientists with deep knowledge of the modeling domain. Moreover, building and testing can take a lot of time when the problem to solve is complicated. During this workshop we will present 5 tools on Google Cloud Platform that make it possible to build AI models with almost no code. Google Cloud Platform not only allows non-data-scientists to build simple models, but also saves experienced ML practitioners time, because they can build prototypes faster and test assumptions before starting a big project. We will present the following tools:
• ML APIs
• AutoML Vision
• AutoML Tables
• BigQuery ML
• Dialogflow
Chmura Krajowa
Chmura Krajowa
Round II
EXPLAINABLE AI
Neural networks are currently the most popular machine learning method, and one common class of use cases is pattern recognition on images. Neural networks, like any other solution, are subject to security issues. In this training we go through potential leaks and vulnerabilities of neural networks. This training is dedicated to managers and data scientists who want to learn how to find leaks and secure a neural network.
Codete
BUILDING AND OPERATING AN OPEN SOURCE DATA SCIENCE PLATFORM
There are many great tutorials for training your deep learning models using TensorFlow, Keras, Spark or one of the many other frameworks. But training is only a small part of the overall deep learning pipeline. This workshop gives an overview of building a complete automated deep learning pipeline, from exploratory analysis through training, model storage, model serving, and monitoring.
ArangoDB