0 votes
1 answer
181 views

When I use ThreadPoolExecutor, I can send a batch of requests with a limit on the number of parallel requests, like this: with ThreadPoolExecutor(max_workers=MAX_PARALLEL_REQUESTS) as pool: results = list(pool....
andre487 • 1,409
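The pattern in the snippet above can be sketched end to end with the standard library alone. Note that `fetch`, the example URLs, and the value of `MAX_PARALLEL_REQUESTS` below are hypothetical stand-ins, not part of the original question:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_PARALLEL_REQUESTS = 4  # upper bound on in-flight requests (assumed value)

def fetch(url):
    # hypothetical request function; a real one might use urllib.request
    return f"response for {url}"

urls = [f"https://example.com/{i}" for i in range(10)]

with ThreadPoolExecutor(max_workers=MAX_PARALLEL_REQUESTS) as pool:
    # pool.map preserves input order and never runs more than
    # max_workers calls at once
    results = list(pool.map(fetch, urls))
```

`Executor.map` returns results in input order regardless of which request finishes first, which is usually what a batched client wants.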
0 votes
1 answer
508 views

I have the following code blocks, each in JupyterLab (this works as expected in Jupyter, but not in JupyterLab): def work(sleepTime): import time import datetime start = datetime.datetime....
Jared • 26.3k
0 votes
1 answer
555 views

I am going through the tutorial to learn ipyparallel and while doing so, I got the error: AttributeError: module 'ipyparallel' has no attribute 'Cluster' I uninstalled and reinstalled the package but ...
JR222 • 1
1 vote
0 answers
142 views

I'm trying to use the function remove_cluster of ipyparallel.ClusterManager(): import ipyparallel as ipp cluster = ipp.Cluster(n=2) # start cluster synchronously cluster.start_cluster_sync() # <...
user2314737 • 29.7k
4 votes
1 answer
4k views

What are these zombie ipykernel_launcher processes on my machine, which are hogging too much memory? This is the output of the htop command, but when I ps for those processes (to kill them), I do not see them: ps ...
sandeepsign
0 votes
1 answer
147 views

I am trying to get workers to output some information from their ipython kernel and execute various commands in the ipython session. I tried the examples in the documentation and the ipyparallel ...
SultanOrazbayev
1 vote
2 answers
5k views

I have this code which I would like to speed up using multi-processing: matrix=[] for i in range(len(datasplit)): matrix.append(np.array(np.asarray(datasplit[i].split()),dtype=float)) The ...
galaxygal
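A minimal sketch of parallelising that conversion loop with the standard library alone; `parse_row` and the toy `datasplit` below are hypothetical stand-ins for the question's NumPy-based parsing:

```python
from concurrent.futures import ThreadPoolExecutor

def parse_row(line):
    # convert one whitespace-separated string into a list of floats
    # (stands in for np.asarray(line.split(), dtype=float) in the question)
    return [float(x) for x in line.split()]

datasplit = ["1 2 3", "4 5 6", "7 8 9"]  # toy stand-in for the real data

# ThreadPoolExecutor keeps the sketch self-contained; for genuinely
# CPU-bound parsing, ProcessPoolExecutor offers the same map interface
# and sidesteps the GIL
with ThreadPoolExecutor() as pool:
    matrix = list(pool.map(parse_row, datasplit))
```

Because `pool.map` preserves input order, `matrix` comes back in the same row order as the original serial loop produced.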
2 votes
1 answer
621 views

As far as I understand, an ipython cluster manages a set of persistent namespaces (one per engine). As a result, if a module that is imported by an engine engine_i is modified, killing the main ...
Ash • 4,778
2 votes
1 answer
1k views

I'm trying to follow these instructions on using ipyparallel in order to speed up some Python scripts in Jupyter Notebook. When doing import ipyparallel as ipp IPPC = ipp.Client() I get the following ...
sarah.connr
2 votes
2 answers
756 views

I'm trying to follow these instructions on using ipyparallel in order to speed up some Python scripts in Jupyter Notebook. When doing import ipyparallel as ipp IPPC = ipp.Client() I get the following ...
ajewzaliefff
10 votes
2 answers
476 views

I am using the ipyparallel module to speed up an all-by-all list comparison, but I am having issues with huge memory consumption. Here is a simplified version of the script that I am running: From a ...
Pierre Joubert
1 vote
0 answers
49 views

I know similar questions on this topic have been asked before, but I'm still struggling to make any headway with my problem. Basically, I have three dataframes (of sizes 402 x 402, 402 x 3142, and 1 ...
mikdale • 11
1 vote
1 answer
1k views

I have something like this outputs = Parallel(n_jobs=12, verbose=10)(delayed(_process_article)(article, config) for article in data) Case 1: Run on ubuntu with 80 cores: CPU(s): 80 ...
suprita shankar
1 vote
1 answer
286 views

I want to do nested parallelism with scikit-learn LogisticRegressionCV inside a for loop: for i in range(0,10): LogisticRegressionCV(n_jobs=-1) I want to parallelize the for loop as well. I ...
Aakash saboo
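The nesting pattern the question describes (a parallel outer loop whose body is itself parallel) can be sketched generically with the standard library; `fit_one` and `inner_task` below are hypothetical stand-ins, not scikit-learn calls:

```python
from concurrent.futures import ThreadPoolExecutor

def inner_task(x):
    return x * x  # stand-in for one cross-validation fit

def fit_one(i):
    # stand-in for one LogisticRegressionCV call; it runs its own
    # inner pool, mirroring n_jobs inside the estimator
    with ThreadPoolExecutor(max_workers=2) as inner:
        return sum(inner.map(inner_task, range(i + 1)))

# the outer loop is parallelised too; keeping outer workers times
# inner workers close to the physical core count avoids oversubscription
with ThreadPoolExecutor(max_workers=4) as outer:
    results = list(outer.map(fit_one, range(10)))
```

The main hazard with nested parallelism is oversubscription: if the outer loop and the inner `n_jobs=-1` each try to claim every core, the layers multiply and throughput drops, so one layer usually gets a reduced worker count.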
1 vote
0 answers
659 views

I have been trying to read a large file and write to another file at the same time, after processing the data from the input file. The file is pretty huge, around 4-8 GB. Is there a way to parallelise ...
Harsh Sharma
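One way to process a multi-gigabyte file without holding it in memory is to stream fixed-size chunks; this is a minimal sequential sketch, with `process` a hypothetical stand-in for the real per-chunk work:

```python
CHUNK_SIZE = 64 * 1024  # tune to the workload; the 4-8 GB file never lives in memory at once

def process(chunk):
    # hypothetical transformation; replace with the real per-chunk work
    return chunk.upper()

def convert(src_path, dst_path):
    with open(src_path) as src, open(dst_path, "w") as dst:
        # iter() with a sentinel yields fixed-size chunks until read()
        # returns an empty string at end-of-file
        for chunk in iter(lambda: src.read(CHUNK_SIZE), ""):
            dst.write(process(chunk))
```

To parallelise the `process` step, chunks can be handed to an executor, but note that `Executor.map` consumes its input iterable eagerly, so the number of in-flight chunks must be bounded by hand for truly huge files.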
3 votes
0 answers
412 views

I implemented a parallel function using the ipyparallel library. Is there a way to print from an engine to the main process? Here is what I'm trying to do: import ipyparallel ...
Borreguin Sanchez
0 votes
1 answer
3k views

I made this timer widget that works the way I want, but it locks up the notebook so I can't execute other code cells at the same time. Here's an example of what I mean: Any ideas for getting the ...
David Skarbrevik
3 votes
1 answer
1k views

I have a loop which does a bunch of CPU-intensive calculations and appends the result to a list, one per iteration. How can I have that run in parallel? In C# there are concurrent containers; how ...
Abel Dantas
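For the loop-plus-list pattern above, Python usually needs no concurrent container: futures can write into pre-allocated list slots (or `Executor.map` can return results in input order). A minimal sketch, with `heavy` a hypothetical stand-in for the CPU-intensive body:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def heavy(n):
    return sum(i * i for i in range(n))  # stand-in for the CPU-intensive body

inputs = [1_000, 2_000, 3_000, 4_000]
results = [None] * len(inputs)

with ThreadPoolExecutor() as pool:
    # submit each iteration, remembering its slot; writing to distinct
    # list indices from different futures is safe, so no concurrent
    # container is needed
    futures = {pool.submit(heavy, n): i for i, n in enumerate(inputs)}
    for fut in as_completed(futures):
        results[futures[fut]] = fut.result()
```

For genuinely CPU-bound Python code, swap `ThreadPoolExecutor` for `ProcessPoolExecutor` (under an `if __name__ == "__main__":` guard) to sidestep the GIL.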
3 votes
0 answers
478 views

Within an IPython notebook, using Jupyter, I try to do a calculation that runs for an extended period of time in a while loop. I use pyplot to show the current status of my calculation, and I would ...
FMey • 31
1 vote
1 answer
271 views

Speaking of ipyparallel, is it possible to specify a number of ipengines to simultaneously launch on a slave machine, and if so - how do I do it? For example, one can specify a number of engines to ...
Vasily • 2,554
1 vote
0 answers
203 views

I originally wrote a nested for loop over a test 3D array in python. As I wanted to apply it to larger array which would take a lot more time, I decided to parallelise using ipyparallel by writing it ...
mallowcodes
1 vote
0 answers
424 views

I have set up a cluster with ipyparallel on my local machine and wish to convert it to a dask cluster. My primary motivation for this is to be able to use the web ui with dask (via bokeh) to monitor ...
Rowan_Gaffney
4 votes
2 answers
2k views

I want to dynamically start clusters from my Jupyter notebook for specific functions. While I can start the cluster and get the engines running, I am having two issues: (1) I am unable to run the ...
RRC • 1,432
6 votes
0 answers
524 views

I am trying to start a ipyparallel cluster using MPI. The ipcluster_config has following lines modified as such: c.MPILauncher.mpi_cmd = ['mpiexec'] c.MPIControllerLauncher.controller_args = ['--...
Kabira K • 2,065
1 vote
0 answers
127 views

I have a custom Jupyter kernel. It is specified using a kernel.json kernelspec. An extract from kernel.json shows how to start the custom kernel: source /opt/conda/bin/activate gdb_ipykernel &&...
dwjbosman • 966
6 votes
1 answer
4k views

I need to use my model to do predictions in batches and in parallel in Python. If I load the model and create the data frames in a regular for loop and use the predict function, it works with ...
RDizzl3 • 846
3 votes
0 answers
361 views

Trying out ipyparallel, and getting stuck on the very early step of pushing code to my engines. Started up my engines with: ipcluster start -n 4 Then try out putting things on the engines with: ...
I.P. Freeley
1 vote
0 answers
199 views

Thinking of using ipyparallel to develop a machine learning algorithm on a cluster, mainly using pandas, scikit-learn and numpy. What are the recommended debugging techniques? Is it possible to ...
Emre Tezel
1 vote
0 answers
603 views

So the case is the following: I wanted to compare the runtime of a matrix multiplication with IPython parallel versus just running on a single core. Code for normal execution: import numpy as np n = ...
SourBitter
2 votes
0 answers
341 views

This issue may be related to https://github.com/ipython/ipyparallel/issues/207 which is also not marked as solved, yet. I also opened this issue here https://github.com/ipython/ipyparallel/issues/286 ...
LFish • 297
0 votes
1 answer
335 views

This might be a naive question, but I've really tried searching multiple resources: multiprocessing and ipyparallel, but these seem to lack appropriate information for my task. What I have is a ...
Zhiya • 620
1 vote
0 answers
239 views

I came across this issue while using ipyparallel. When I try to use cloudpickle, it appears I cannot push or pull globals anymore. Does anybody know the reason or a way around this? In general, I ...
user1620443
3 votes
2 answers
2k views

I've just started using ipyparallel. I'm using VS2017 and importing it as: import ipyparallel as ipp And then attempting to start it using: def main(): rc = ipp.Client() if __name__ == "__main__"...
Robbie Cooper
0 votes
0 answers
42 views

I want to run a simple function across separate cores on my computer. My computer has four cores. To start with, a simple function: def exp(x): return x**2 Now I want to give this function ...
Stefano Potter
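A common answer to spreading a simple function across cores is a worker pool. The sketch below uses `multiprocessing.pool.ThreadPool`, which mirrors `multiprocessing.Pool`'s API while staying self-contained; for CPU-bound work on the four cores, `multiprocessing.Pool(4)` under an `if __name__ == "__main__":` guard is the usual swap:

```python
from multiprocessing.pool import ThreadPool  # same API as multiprocessing.Pool, thread-backed

def exp(x):
    # the question's function (note: the name shadows math.exp if both are in scope)
    return x ** 2

# map splits the iterable across the 4 workers and reassembles
# the results in input order
with ThreadPool(4) as pool:
    squares = pool.map(exp, range(8))
```

`multiprocessing.Pool` requires the mapped function to be importable (defined at module top level) because arguments and callables are pickled to the worker processes; the thread-backed pool has no such restriction.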
3 votes
0 answers
500 views

I would like to import a class, let's call it MyClass, and assume it is stored in a file MyClass.py. However, the class itself depends on various files distributed over several folders. So if I want to ...
user56643 • 383
0 votes
0 answers
479 views

(This is my first time asking a question; if you think the tags or description have something wrong, please tell me, thank you!) I'm working on Matrix Factorization and use the module sklearn....
WooLaw Ren
0 votes
1 answer
783 views

I have a python script that contains a number of user-defined functions that I have set up as a package locally. I can run all of the functions in the input_processing.py script except for the one ...
Ben Miller
1 vote
1 answer
641 views

https://code.tutsplus.com/articles/introduction-to-parallel-and-concurrent-programming-in-python--cms-28612 From this link, which I have studied, I have a few questions. Q1: How thread pool (concurrent) and ...
NarenSuri
0 votes
1 answer
230 views

Using Windows / ipython v6.0.0 I am running ipcontroller and a couple of ipengines on a remote host and all appears to work fine for simple cases. I try to adjust the pythonpath on the remote host (...
user1515250
0 votes
1 answer
210 views

Suppose I have two Python files test.py from ipyparallel import Client def hi(a): return b + (a * 2) def run(): b = 3 client = Client() view = client[:] view.push({'b':b}) ...
gman9732
0 votes
1 answer
80 views

A couple of questions related to best practices with ipyparallel. I'm attempting to use it to implement a Monte Carlo framework for a model that takes ~15 to run. The idea is to run N engines (via SLURM) ...
Rich Plevin
4 votes
0 answers
184 views

When using an ipyparallel cluster to process tasks in parallel, how do I iterate over the AsyncMapResult when some of the tasks have raised an exception? All I get is the exception, but I can't ...
Kal • 1,729
1 vote
1 answer
300 views

I am learning parallel computation in ipython. I came across an example, from ipyparallel import Client rc = Client() rc.block = True print(rc.ids) def mul(a,b): return a*b dview = rc[:] print(...
Kris • 9
3 votes
0 answers
205 views

I am running iPython Parallel. Every time I restart the main notebook kernel, I have to restart the cluster as well. Without doing so, I get a NoEnginesRegistered error (in other words, the ...
cjm2671 • 19.6k
6 votes
0 answers
3k views

I'm trying to parallelize the GridSearchCV of scikit-learn. It's running on a jupyter (hub) notebook environment. After some research I found this code: from sklearn.externals.joblib import Parallel, ...
ScientiaEtVeritas
4 votes
2 answers
703 views

I'm just wondering if there is some python code or magics I can execute that will restart the ipython cluster. It seems like every time I change my code, it needs to be restarted.
cjm2671 • 19.6k
1 vote
0 answers
194 views

I've got a cluster of engines running. When I set them to work on a long running calculation, after a couple of minutes they just seem to 'die' silently, one by one, until the calculation stalls ...
cjm2671 • 19.6k
0 votes
1 answer
967 views

I'm trying to get a basic ipyparallel environment working using mpi4py as described in the ipyparallel documentation. After starting the ipcluster, I load ipython and try to create a client but it has ...
m3wolf • 114
0 votes
1 answer
120 views

Looking to make the following code parallel: it reads in data in one large 9 GB proprietary format and produces 30 individual CSV files based on the 30 columns of data. It currently takes 9 minutes per ...
red79phoenix
1 vote
1 answer
96 views

When running this code, I get this error: from ipyparallel import error, AsyncHubResult, DirectView as dv, Reference @dv.parallel(block=True) def np_std_par(x): return np_std(x) TypeError: ...
tensor • 3,390