
Internship Programme - Domestic Internships

Internship stipends are intended for young researchers (up to 35 years of age) interested in information technologies. An intern may apply to the Committee with their own ready project or choose one of the internship topics proposed by the ISD Board.

Internships for ISD doctoral students and for employees of the institutes carrying out the project may take place at any domestic research centre, subject to approval by the Recruitment Committee.

Internships for all other interns are carried out under the supervision of a research staff member of one of the institutes carrying out the Project.

The candidate's supervisor, together with the prospective intern, should prepare an internship schedule and submit it to the Recruitment Committee. In addition to the internship schedule and the project description, the intern submits the remaining application documents to the Committee, i.e. an application, an internship request, a cover letter, a CV, documented scientific achievements, letters of recommendation, and the consent of the host institution to the internship.

After receiving a positive decision from the Recruitment Committee, the intern signs two agreements: a stipend agreement between the intern and the Head of the ISD, and a trilateral agreement between the intern, the Directorate of IPI PAN as the party referring the intern to the internship, and the Directorate of the research unit where the internship will take place. Both agreements specify in detail the rights and obligations arising from the internship.

During the internship, the intern is required to keep an Internship Diary and to sign an attendance list at the workplace. Both of these documents (the Internship Diary and the attendance list), together with a monthly report signed by the internship supervisor, should be submitted to the dean's office; they form the basis for the payment of the internship stipend. After completing the internship, each participant receives an internship certificate.

Domestic internships: the internship stipend is up to PLN 3,500 per month for MSc holders and doctoral students, and PLN 4,500 per month for young PhD holders.

Recruitment conditions

Download document templates

Proposed internship topics:

Automatic Interior Layout Design (dr S. Chojnacki)



The purpose of this project is to develop a tool that automatically divides an interior into rooms. The problem has been addressed recently by Merrell. It is similar to other layout design problems, e.g. designing the content of websites, the placement of paragraphs and images in documents, or the placement of furniture in rooms.
The intern will be required to propose a new technique for solving the problem, implement the algorithm in Java as a SweetHome3D plugin, evaluate the working tool with industry users, and present the results during a seminar at ICS PAS.
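For illustration only, the sketch below (in Python rather than Java, and not the new technique the intern is expected to propose) divides a rectangular interior into rooms by recursive binary partitioning; the minimum room size and all names are illustrative assumptions.

    import random

    def split_interior(x, y, w, h, min_side=3.0):
        # Recursively split the axis-aligned rectangle (x, y, w, h) into rooms.
        # Stop when the rectangle can no longer be cut into two valid rooms.
        if w < 2 * min_side and h < 2 * min_side:
            return [(x, y, w, h)]
        # Cut across the longer dimension to keep rooms reasonably compact.
        if w >= h:
            cut = random.uniform(min_side, w - min_side)
            return (split_interior(x, y, cut, h, min_side) +
                    split_interior(x + cut, y, w - cut, h, min_side))
        cut = random.uniform(min_side, h - min_side)
        return (split_interior(x, y, w, cut, min_side) +
                split_interior(x, y + cut, w, h - cut, min_side))

    if __name__ == "__main__":
        for room in split_interior(0.0, 0.0, 12.0, 9.0):
            print("room at (%.1f, %.1f), size %.1f x %.1f" % room)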


Dynamics of cortical-hippocampal interactions during consolidation of remote memory in rodents

Pierre Meyrand, PhD, CNRS

The goal of this project is to elucidate the cortical-hippocampal interactions underlying remote memory formation in awake, behaving animals. To reach this aim, the activation patterns of cortical and hippocampal networks involved in the processes of encoding, storage and retrieval of memory in rodents will be studied using multi-electrode array (MEA) approaches.

Complex interactions between the neocortex and the hippocampus are the neural basis of memory formation. Immediately after learning, memories are labile, that is, subject to interference and trauma, but they are later stabilized, such that they are no longer disrupted by the same interfering events. The cortical and hippocampal structures involved in memory encoding, consolidation and retrieval will be identified using the activity-dependent genes c-fos and zif268, classically used as indirect correlates of neural activation. Subsequent multi-channel recordings of neuronal activity from the underlying structures will be performed in freely moving animals engaged in a remote memory task.

We are looking for a candidate for a PhD study or post-doctoral training who would participate in the above project. The main focus of the project will be the analysis of electrophysiological data; for that reason the candidate should have knowledge and practical experience in computer science. For a better understanding of the experimental paradigm, the student could participate in behavioural experiments and in the identification of brain structures involved in memory processing using immunohistological techniques, as well as in the acquisition of electrophysiological data.

The project will be realized in collaboration with Prof. Tiaza Bem from IBIB PAS and Profs. Pierre Meyrand and Bruno Bontempi from the Institut des Maladies Neurodégénératives (IMN) at Bordeaux 2 University and CNRS.

“New model checking methods for hybrid systems”

prof. Wojciech Penczek, ICS PAS

Hybrid systems (HS) can be viewed as generalisations of Real Time Systems (RTS). The theoretical analysis of HS, as well as the methods of their formal verification, follows advances in the approaches for RTS. However, for HS the main obstacle is the high complexity of the problems involved. In 1996, Henzinger established a theory of hybrid automata. Several papers followed the resulting model checking approach, accompanied by implementations. The conclusion was that model checking of hybrid automata is feasible, but suffers heavily from the state space explosion problem, and that more research is needed. Further research focused on efficient representations of equivalence classes of the variable values, combined with encodings of the discrete parts, and on using external tools for solving sub-problems such as computing preimages. In recent years, SAT solvers have been extended to Satisfiability Modulo Theories (SMT) solvers, tools that combine traditional SAT solving with decidable theories such as Presburger arithmetic. Concerning the representations, several variants of decision diagrams (DDs) have been developed that are capable of representing linear constraints. Recently, however, an alternative representation, And-Inverter Graphs (AIGs), has become quite popular, being more compact than DDs in many applications. Moreover, several new heuristics and metaheuristics are available for solving hard problems in Artificial Intelligence.

The project consists in examining to which extent the advances in non-hybrid symbolic model checking and in solving hard problems in AI can be generalised to hybrid systems.

Our objectives in the project include the following four tasks:

  1. Analysis of the existing verification methods for HS,
  2. Analysis of the existing verification methods for RTS,
  3. Analysis of the novel heuristics and metaheuristics in AI,
  4. Application of the selected methods for RTS and from AI to HS.
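As a small illustration of the SMT-based direction mentioned above (not a method prescribed by the project), the sketch below uses the z3 SMT solver from Python to check one-step bounded reachability of an unsafe state for a toy thermostat hybrid automaton; the automaton, its dynamics and the safe band are invented for the example.

    from z3 import Real, Bool, Solver, Or, Implies, Not, sat

    # Toy thermostat hybrid automaton: in mode "on" the temperature rises at
    # rate 2, in mode "off" it falls at rate 1 (simplified linear dynamics).
    t0, t1, dt = Real('t0'), Real('t1'), Real('dt')
    on = Bool('on')

    s = Solver()
    s.add(dt >= 0, dt <= 1)            # bounded dwell time in the current mode
    s.add(t0 >= 18, t0 <= 22)          # start inside the safe band [18, 22]
    s.add(Implies(on, t1 == t0 + 2 * dt))
    s.add(Implies(Not(on), t1 == t0 - 1 * dt))
    s.add(Or(t1 < 18, t1 > 22))        # property violation: flow leaves the band

    if s.check() == sat:
        print("Unsafe state reachable within one mode dwell:", s.model())
    else:
        print("Safe within one step.")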

“Random Subset Method for Regression”

prof. Jan Mielniczuk, ICS PAS

The Random Subset Method (RSM) is a powerful method of model choice in classification when the problem is high-dimensional. In the RSM, a random subset of features of cardinality m, smaller than the number M of potentially useful features, is chosen, and the problem is solved in the reduced feature space of the selected predictors. The features under consideration are assigned weights based on their performance in the constructed solution. The procedure is repeated several times, cumulative weights of the predictors are calculated, and the selection of predictors is then based on them. The Random Forest method is the best-known example of this approach.

The project concerns the extension of the RSM to regression problems with a quantitative response. The following topics will be researched: the construction of appropriate measures of variables' performance tailored to the regression problem considered, a data-dependent choice of the cardinality m of the chosen small model, and weighted versions of the method in which the inclusion of variables in the small model depends on a preliminary assessment of their performance.
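A minimal sketch of the RSM idea applied to regression, under illustrative assumptions: ordinary least squares is fitted on each random subset, and each predictor accumulates the absolute standardized coefficients of the models it appeared in. This scoring rule is an example choice, not the performance measure the project is meant to develop.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def rsm_regression_scores(X, y, m, n_repeats=500, random_state=0):
        # Random Subset Method sketch for regression: repeatedly fit OLS on m
        # randomly chosen predictors and accumulate per-predictor weights.
        rng = np.random.default_rng(random_state)
        n, M = X.shape
        scores = np.zeros(M)
        counts = np.zeros(M)
        for _ in range(n_repeats):
            subset = rng.choice(M, size=m, replace=False)
            model = LinearRegression().fit(X[:, subset], y)
            # Standardize coefficients by the predictors' spread so that the
            # accumulated weights are comparable across subsets.
            weights = np.abs(model.coef_) * X[:, subset].std(axis=0)
            scores[subset] += weights
            counts[subset] += 1
        return scores / np.maximum(counts, 1)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 50))
        y = 3 * X[:, 0] - 2 * X[:, 5] + rng.normal(size=200)  # only 2 relevant
        final = rsm_regression_scores(X, y, m=10)
        print("top predictors:", np.argsort(final)[::-1][:5])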

Literature:

  1. T.K. Ho. The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Machine Intell., 832-844, 1998.
  2. P. Buhlmann, M. Kalisch, and M.H. Maathuis. Variable selection in high-dimensional linear models: partially faithful distributions and the pc-simple algorithm. Biometrika, 261-278, 2010.
  3. J. Fan and J. Lv. Sure independence screening for ultra-high dimensional feature space (with discussion). Journal of the Royal Statistical Society B, 849-911, 2008.

“Monitoring data sets using the methods of Statistical Process Control (SPC)”

prof. Olgierd Hryniewicz, SRI PAS

Contemporary methods of data acquisition make it possible to build dynamically changing data sets with complicated structures. Methods of data mining allow such structures to be analyzed and the acquired knowledge to be used for practical purposes. In the case of large data sets, a change in the structure of newly acquired data may not influence the structure of the whole data set. However, such information may be very important in practice. When we describe the structure of a data set with a certain one- or multivariate index, we may assume that, in the case of stable data, the values of such an index calculated for new small data sets vary purely randomly. Therefore, there is a need to discriminate between the purely random variation of such an index (or indices) and its statistically significant change. The methodology of Statistical Process Control (SPC), which has been developed over the last 80 years, provides simple and easy-to-understand (very important in practice!) methods for the analysis of the parameters of discrete production processes. It seems plausible that this methodology could be used for the analysis of possible changes in the structure of data sets. When the parameters of the monitored production process change, SPC procedures generate alarms. In the context of the analysis of the structure of data sets, such an alarm may indicate the need for a new detailed analysis of the whole available data.

Problems to be solved:

  1. Proposal of indices describing the structure of a data set (Hint: use methods known from cluster analysis).
  2. Proposal of a statistical methodology to deal with such data (Hint: use sequential statistical methods).
  3. Preliminary tests of the proposed methodology using real data, e.g. data acquired from the Internet.
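A minimal sketch of how problems 1 and 2 above might fit together, under illustrative assumptions: the structure index is the mean distance of a batch's points to the nearest cluster centre fitted by k-means on reference data, and the alarm rule is a classical 3-sigma Shewhart chart. Neither choice is prescribed by the topic.

    import numpy as np
    from sklearn.cluster import KMeans

    def structure_index(batch, kmeans):
        # Index of how well a batch fits the reference cluster structure:
        # mean distance of its points to the nearest reference cluster centre.
        return kmeans.transform(batch).min(axis=1).mean()

    # Reference ("in control") data used to fit the cluster structure and limits.
    rng = np.random.default_rng(0)
    reference = rng.normal(size=(2000, 5))
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(reference)

    # Phase I: centre line and 3-sigma Shewhart limits from reference batches.
    ref_idx = [structure_index(b, kmeans) for b in np.array_split(reference, 40)]
    centre, sigma = np.mean(ref_idx), np.std(ref_idx)
    ucl, lcl = centre + 3 * sigma, centre - 3 * sigma

    # Phase II: monitor newly arriving batches; alarm on a limit violation.
    for i in range(10):
        shift = 0.0 if i < 7 else 1.5      # structure changes from batch 7 on
        batch = rng.normal(loc=shift, size=(50, 5))
        idx = structure_index(batch, kmeans)
        if not (lcl <= idx <= ucl):
            print(f"batch {i}: index {idx:.3f} outside [{lcl:.3f}, {ucl:.3f}] -> alarm")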

“The method of Maximum Likelihood on data streams”

Prof. Szymon Jaroszewicz, ICS PAS

In recent years we have been observing an exponential growth in the amount of data being collected. This growth has made it necessary to develop new analytical approaches capable of handling such vast amounts of data. One of these approaches is so-called data stream analysis, where every record is seen exactly once and immediately discarded; the data are not stored but treated as a continuous stream. The method of maximum likelihood is one of the most important statistical methods and is used to find the parameters of many popular models such as linear or logistic regression. It thus seems natural that maximum-likelihood-based methods should be adapted to the data stream setting. Unfortunately, the solutions currently available, based on methods of stochastic approximation, are very inefficient. This project will aim at developing more efficient algorithms for implementing maximum likelihood methods on data streams.
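The sketch below shows the baseline stochastic-approximation approach that the project aims to improve on: online maximum-likelihood estimation of logistic regression parameters, where each record updates the estimate once and is then discarded. The step-size schedule and the simulated stream are illustrative assumptions.

    import numpy as np

    def stream_logistic_mle(stream, dim, lr0=0.5):
        # Baseline stochastic-approximation MLE for logistic regression on a
        # stream: each (x, y) record contributes one gradient step of the
        # log-likelihood and is then discarded; nothing is stored.
        beta = np.zeros(dim)
        for t, (x, y) in enumerate(stream, start=1):
            p = 1.0 / (1.0 + np.exp(-x @ beta))       # predicted P(y = 1 | x)
            beta += (lr0 / np.sqrt(t)) * (y - p) * x  # Robbins-Monro step size
        return beta

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        true_beta = np.array([1.0, -2.0, 0.5])

        def stream(n):
            for _ in range(n):
                x = rng.normal(size=3)
                y = rng.random() < 1.0 / (1.0 + np.exp(-x @ true_beta))
                yield x, float(y)

        print("estimate:", stream_logistic_mle(stream(100_000), dim=3))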

Literature:

  1. S. Pang, S. Ozawa, N. Kasabov, Chunk Incremental LDA Computing on Data Streams, Advances in Neural Networks, Springer 2005
  2. S. Muthukrishnan, Data Streams: Algorithms and Applications
  3. H. Kushner and G. Yin, Stochastic Approximation Algorithms and Applications, Springer, 1997

“Implementations and theoretical properties of efficiently computable grammar-based codes”

Dr Łukasz Dębowski – ICS PAS

Grammar-based coding is a method of lossless data compression originally developed for compressing texts in natural language. The method consists in constructing the smallest possible context-free grammar that generates the text as its only production. It has been proved that global grammar minimization is an NP-hard problem [1], but many interesting algorithms based on local grammar minimization have been proposed [1,4,5,6]. Some of these codes are universal, i.e., their asymptotic compression rate equals the entropy rate of any stationary process [3,4].
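As a concrete, deliberately simple illustration of local grammar minimization in the spirit of [1,4,5,6] (not of the admissibly minimal codes of [3]), the sketch below repeatedly replaces the most frequent adjacent pair of symbols with a fresh nonterminal, in the style of Re-Pair.

    from collections import Counter

    def repair_grammar(text):
        # Re-Pair-style local grammar minimization (illustrative only):
        # replace the most frequent adjacent pair of symbols with a new
        # nonterminal until no pair occurs more than once.
        seq = list(text)
        rules = {}                      # nonterminal -> (left symbol, right symbol)
        next_id = 0
        while True:
            pairs = Counter(zip(seq, seq[1:]))
            if not pairs:
                break
            pair, freq = pairs.most_common(1)[0]
            if freq < 2:
                break
            nt = f"N{next_id}"
            next_id += 1
            rules[nt] = pair
            # Rewrite the sequence, replacing non-overlapping occurrences of the pair.
            new_seq, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                    new_seq.append(nt)
                    i += 2
                else:
                    new_seq.append(seq[i])
                    i += 1
            seq = new_seq
        return seq, rules

    if __name__ == "__main__":
        start, rules = repair_grammar("abracadabra abracadabra")
        print("start:", " ".join(start))
        for nt, (a, b) in rules.items():
            print(f"{nt} -> {a} {b}")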

The aim of this project is to develop new methods of grammar-based compression, with an emphasis on the admissibly minimal codes proposed in [3]. We suppose that admissibly minimal codes are particularly good at compression because the excess length of such a code is bounded by the number of nonterminal symbols in the grammar [3]. An interesting research problem is whether there exists an admissibly minimal grammar-based code that is computable in polynomial time. If so, such a code would be worth implementing so that its properties could be checked in practice.

The successful candidate should be familiar with the book [2].

References:

  1. M. Charikar, E. Lehman, A. Lehman, D. Liu, R. Panigrahy, M. Prabhakaran, A. Sahai, and A. Shelat, "The smallest grammar problem," IEEE Transactions on Information Theory, vol. 51, pp. 2554-2576, 2005.
  2. T. M. Cover, J. A. Thomas, Elements of Information Theory, 2nd ed., Wiley, 2006.
  3. Ł. Dębowski, "On the Vocabulary of Grammar-Based Codes and the Logical Consistency of Texts," IEEE Transactions on Information Theory, vol. 57, pp. 4589-4599, 2011.
  4. J. C. Kieffer and E. Yang, "Grammar-based codes: A new class of universal lossless source codes," IEEE Transactions on Information Theory, vol. 46, pp. 737-754, 2000.
  5. C. G. de Marcken, "Unsupervised language acquisition," Ph.D. dissertation, Massachusetts Institute of Technology, 1996.
  6. C. G. Nevill-Manning, "Inferring sequential structure," Ph.D. dissertation, University of Waikato, 1996.


The project is co-financed by the European Union under the European Social Fund.