let me check
likewise
<@U0269UVGBFE> can you share your email address
abbette at <http://amazon.com|amazon.com>
OK, I found the email. Sorry, it was buried.
Got it <@U0269UVGBFE>, I will send you an email shortly; just generating credentials. <@U0269UVGBFE> you should have an email in your inbox; let me know if you don't or if the creds don't work.
<@U024C17HKQW> I haven't received that email either. Could you please send that email again?
I’m in
<@U025GJJNSUF> can you check spam?
Somehow I can't log in to Anyscale; it says the organization raysummit2021 is not working?
<@U026A081BKN> what error are you getting? make sure you use <https://anyscale-dev.dev/login> as the URL
<@U026430BG1W> the error about raysummit2021 not being permitted as a public organization is now gone. I'm able to log in.
Great, glad to hear :slightly_smiling_face:
What type of data can be put in the object store? Should the data be serializable?
Anything that is serializable by cloudpickle works, and you can also define your own serialization function if it is not. See also <https://docs.ray.io/en/master/serialization.html?highlight=serialization#customized-serialization>
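For example, a minimal sketch of the custom-serializer route (the `CustomType` class and the lambda serializers here are made up for illustration):
```
import ray

ray.init()

class CustomType:
    # A toy class standing in for something cloudpickle struggles with.
    def __init__(self, value):
        self.value = value

# Register a custom (de)serializer pair for the type.
ray.util.register_serializer(
    CustomType,
    serializer=lambda obj: obj.value,
    deserializer=lambda value: CustomType(value),
)

ref = ray.put(CustomType(42))   # stored in the object store
print(ray.get(ref).value)       # 42
```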
Hi. What will happen if a regular function tries to access a remote object?
If you just pass a remote object into a regular function, it will be of type `ObjectRef`; you can call `ray.get` on it to get the actual object.
Thank you. So wrapping a function with the remote decorator just unwraps ObjectRefs for the user, so they don't have to worry about unwrapping manually?
that's right, it automatically unwraps (and also calls the function remotely so you can run many at the same time)
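A small sketch of the difference, assuming a running Ray instance:
```
import ray

ray.init()

@ray.remote
def produce():
    return 41

@ray.remote
def add_one_remote(x):
    # Ray resolves the ObjectRef before the call, so x is already 41 here.
    return x + 1

def add_one_regular(x):
    # A plain function receives the ObjectRef itself; resolve it manually.
    return ray.get(x) + 1

ref = produce.remote()
print(ray.get(add_one_remote.remote(ref)))  # 42
print(add_one_regular(ref))                 # 42
```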
In that case, is it redundant to call `obj_ref = counter.increment.remote()` in the last example? Or rather, is the “remote” part of the call redundant?
The remote part is to point out to the programmer that the call will actually be remote; it is more of an API convention than technically necessary. In fact, a very early version of Ray didn't have it :smile:
It looks like it’s not optional though
but if you leave it out you do get an error
Got it. This seems to add effort to the process of adapting code for Ray – is that true, and how did you think about the tradeoff?
I didn't receive any emails related to the login and password. My email address is <mailto:[email protected]|[email protected]>. (I signed up and paid by Mastercard just before the meeting started on June 22.)
Looking... sent <@U026J5JBTNU>, let me know if you received it.
What does the multiprocessing pool offer in addition to remote functions?
It is a drop-in replacement for the standard multiprocessing pool that allows you to scale out on a cluster. The reason we have it is so you can use it with existing code that is already implemented with a multiprocessing pool; if you write new code, remote functions are more flexible.
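A minimal sketch of the drop-in usage; only the import changes compared to the standard library (`square` is just an illustration):
```
# from multiprocessing import Pool        # standard library version
from ray.util.multiprocessing import Pool  # Ray-backed drop-in replacement

def square(x):
    return x * x

pool = Pool()                        # starts/connects to Ray under the hood
print(pool.map(square, range(10)))   # [0, 1, 4, ..., 81]
```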
and faster?
and simpler yeah :slightly_smiling_face:
cool
Does anyone know the best way, or any resources, for using numba/jax JIT LLVM-compiled functions inside Ray actors / remote functions?
i don’t know of any specific resources for those libraries, but in general, Ray does its best to not interfere with other libraries, so you should be able to invoke numba/jax within your remote function/task.
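For example, something along these lines should work with numba (a rough sketch; it assumes numba is installed in the worker environment, and `fast_sum` is a made-up function):
```
import ray

ray.init()

@ray.remote
def jit_task(n):
    # Import and compile inside the task so the JIT compilation happens
    # in the worker process that actually runs the function.
    from numba import njit

    @njit
    def fast_sum(m):
        total = 0
        for i in range(m):
            total += i
        return total

    return fast_sum(n)

print(ray.get(jit_task.remote(1_000_000)))
```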
Where is the link to the "Ray Dashboard" in the jupyter notebook?
It is in the cluster overview (the page after you clicked on cluster-0 in the Ray-Tutorial project): click on Projects / Ray-Tutorial / cluster-0 / Dashboard at the top.
Is there a way to do a two-dataset join with Parallel Iterators?
Parallel iterators support a `union` operation to combine iterators, but there’s no `join` API in the SQL sense.
Thank you. Maybe there is some other higher-level API (maybe third party) available to do join/group by over datasets?
yes! if you represent your dataset with pandas, pyspark, or dask, you can use modin, RayDP (Spark on Ray), or dask-on-ray which all have join/group by operations
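For instance, with Modin the change is mostly the import; a rough sketch with made-up column names:
```
import ray
import modin.pandas as pd  # Modin uses Ray as its execution engine here

ray.init()

left = pd.DataFrame({"key": [1, 2, 3], "a": [10, 20, 30]})
right = pd.DataFrame({"key": [2, 3, 4], "b": [200, 300, 400]})

joined = left.merge(right, on="key", how="inner")  # distributed join
grouped = joined.groupby("key").sum()              # distributed group by
print(grouped)
```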
Great. I'll take a look at both and review the related Ray Summit sessions.
What's the difference between shard and batch?
Batching: take a bunch of small pieces of data and combine them into a bigger chunk that takes less overhead to process. Sharding: take some data (which might be batched) and assign it to a different machine/processor to execute in parallel.
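In terms of the parallel iterators API mentioned above, that looks roughly like this (a sketch from memory; double-check the exact `ray.util.iter` calls against the docs):
```
import ray
from ray.util.iter import from_items

ray.init()

# Shard: split 100 items across 4 shards so they can be consumed in parallel.
it = from_items(list(range(100)), num_shards=4)

# Batch: group elements into chunks of 10 to reduce per-item overhead.
batched = it.batch(10)

for chunk in batched.gather_sync():
    print(len(chunk))  # each chunk is a list of up to 10 items
```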
Do you guys have an example with asyncio and Ray?
Here is some info with Ray + Asyncio! <https://docs.ray.io/en/master/async_api.html>
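A tiny sketch of both directions (awaiting an `ObjectRef` from asyncio code, and defining an async actor):
```
import asyncio
import ray

ray.init()

@ray.remote
def slow_task():
    return "task done"

@ray.remote
class AsyncActor:
    async def work(self):
        await asyncio.sleep(0.1)  # other calls to this actor can interleave here
        return "actor done"

async def main():
    # ObjectRefs can be awaited directly inside asyncio code.
    print(await slow_task.remote())
    actor = AsyncActor.remote()
    print(await actor.work.remote())

asyncio.run(main())
```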
Are there any examples using the PyCaret library with Ray?
I’m not super familiar with PyCaret but here is a blog by <@U025ETR0UEN> <https://medium.com/distributed-computing-with-ray/bayesian-hyperparameter-optimization-with-tune-sklearn-in-pycaret-a33b1592662f>
Hey <@U026114JRDK> Ray is not fully integrated with PyCaret but PyCaret provides support for distributed HPO with Ray Tune, as outlined in the blog post Ian shared. If you have any questions in regard to that please let me know, happy to help
Can I create a Ray Cluster of two laptops to use more cores locally?
It’s possible. You can call `ray start --head` on your first laptop and call `ray start --address=<first machine>` on the second, or use the ray autoscaler for private clusters <https://docs.ray.io/en/releases-0.8.5/autoscaling.html#private-cluster>
Just make sure that they are on the same network!
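Once both laptops have joined, a driver script connects to the existing cluster instead of starting a new one; a rough sketch:
```
import ray

# Connect to the cluster created with `ray start` rather than starting
# a fresh local instance.
ray.init(address="auto")

# One entry per machine that has joined the cluster.
for node in ray.nodes():
    print(node["NodeManagerAddress"], node["Resources"].get("CPU"))
```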
How many laptops can I add to the cluster?
in theory, a lot, but we would recommend some beefier machines if possible. check out the scalability envelope for more details <https://github.com/ray-project/ray/blob/master/benchmarks/README.md>
Does all the data have to fit in the memory of the cluster?
By default, yes, but Ray has an advanced object spilling feature that can write objects to disk if it runs out of memory.
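Spilling can also be pointed at a specific directory when starting Ray; a rough sketch based on the `object_spilling_config` system setting (treat the exact keys as something to verify in the docs):
```
import json
import ray

ray.init(
    _system_config={
        # Spill objects to local disk when the object store fills up.
        "object_spilling_config": json.dumps(
            {"type": "filesystem", "params": {"directory_path": "/tmp/ray_spill"}}
        )
    }
)
```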
Nice. Is there any way to read in a distributed manner? In the notebook, all reading of the data happens on one node. If the data is on a shared store like S3 or HDFS, does Ray have out-of-the-box tools to read it into the object store?
Right now, Ray doesn’t have tools for that, but some of the higher level libraries do. (For example, dask-on-ray, and modin can both read from s3 out of the box). You can also bring your own tools
Are we going to look at how to distribute a dataframe?
AFAIK we won't be covering it in this tutorial; there are two solutions available: <https://docs.ray.io/en/master/dask-on-ray.html> and <https://docs.ray.io/en/master/modin/index.html> (see also <https://github.com/modin-project/modin>)
Is there a general Ray Slack channel for ongoing support and questions?
<http://ray-distributed.slack.com|ray-distributed.slack.com>
restricted to berkeley and anyscale
Also, there is <https://discuss.ray.io/> which has better search/indexing and is recommended for longer questions :slightly_smiling_face: For the Slack you need to fill out <https://forms.gle/9TSdDYUgxYs8SA9e8> to be invited.
If there is a memory leak, what would the error look like?
If you run out of memory, the Ray task will raise an error, which will show up on the driver and is also printed to the log files. You can automatically deal with memory leaks that you can't/don't want to fix by setting `max_calls=1` on the task to restart workers after each call.
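Concretely, a sketch of the `max_calls=1` pattern (the leaky task here is made up):
```
import ray

ray.init()

@ray.remote(max_calls=1)
def leaky_task():
    # Pretend this calls into a C extension that leaks memory.
    big = bytearray(10**7)
    return len(big)

# After each call the worker process exits and a fresh one is started,
# so any leaked memory is reclaimed by the OS.
print(ray.get([leaky_task.remote() for _ in range(3)]))
```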
Ah OK. Thanks <@U026430BG1W>
Yes <@U025X6U2XL6> - the videos will be on demand after the end of the day
Great. What is the license/permission for them?
You’ll be able to watch them, but not download them.
Is Ray useful to run on a single Workstation with a single GPU and multicore CPU?
I think it can be very useful for prototyping and proof of concept, based on my experience to date with Tune and RLLib. These incorporate some of the multiprocessing and distribution features of Ray core.
Thanks for the great questions during the morning tutorial about Ray core
Excellent presentation, and thanks. I am using Ray Tune with RLlib. It appears I have access to some parallelization features through these two packages. What can't I do with respect to cluster/parallel processing?
Many thanks David, and great question here – For both RLlib and Tune, those are already going to be leveraging features for parallelization. There may be some configuration required to make the best use of cluster resources, e.g., are there GPUs available, what kind, etc.
The cluster is starting up. The terminal will be available after the cluster is active.
Do you still have the problem Simon?
Hi... yes I do. My cluster seems to be in a suspended state <https://anyscale-dev.dev/o/raysummit2021/projects/prj_9RTMrFJQUhNfqz6TJiyyuTKA/app-config-details/bld_Q8rhXSuGQMdxR8ntU46FNH83>
What is your username?
It looks to be active now? <https://anyscale-dev.dev/o/raysummit2021/projects/prj_9RTMrFJQUhNfqz6TJiyyuTKA/clusters/ses_zhdwMD493NShc3T7J5GpGeaa>
It has been reporting that the cluster is starting up for the last 10 minutes or even longer.
Got it! Yes, I’m seeing that too. It is working now.
thanks for your help
No problem, hope everything will be smooth afterwards!
I'm following the tutorial 24 hours late, now. The Jupyter, Dashboard, and TensorBoard links don't work - I guess something was turned off on the platform. Any chance someone can turn it on please?
cc <@U01VD58RVLK> <@U024C17HKQW>
<@U026E8WB1BN> - the materials were only available during the live tutorial and are no longer available. We will be hosting more tutorials in the future, so you’ll have another chance to participate. You can find the info on those events (once it’s available) on Anyscale’s events page: <https://www.anyscale.com/events> We’re sorry for any inconvenience this has caused
OK. Thanks!
Hi <@U01VD58RVLK>, can I get access to the slides of the core tutorial? Paco mentioned they will be available.
I believe you should be able to access them here: <https://github.com/DerwenAI/ray_tutorial>
Hello <@U01VD58RVLK>, thank you so much! The slides have links disabled, but that's okay. By the way, I'm looking for a GitHub link for the second tutorial as well, involving RLlib and Ray Tune.
Hi <@U026AF8Q9BN>, which links are disabled in the <https://github.com/DerwenAI/ray_tutorial/blob/main/slides.pdf> slide deck? There should not be any, so I'd really like to fix those! :)
very kind of you to respond <@U025PMJJ9GX> I realized that the links work fine after downloading. Viewing the pdf directly from Github seems to disable links. There's a wealth of knowledge in these slides and links, so didn't want to miss out anything :slightly_smiling_face:
Good to hear :slightly_smiling_face: Yes, the GitHub rendering breaks PDF in some ways. At Derwen, we did a separate kind of SWA preso viewer in Flask/CloudFlare, and I could barely believe how very strange parsing PDFs can get...
can't fathom myself either :)
> so does Ray also have a work-stealing type of task execution system?
and if so, how is that managed on K8s? A new pod for every “task”?
no, i think the standard usage for ray is to start ray "nodes" within a pod, and ray will execute tasks within the pod
Cool, and no need for gang scheduling? And is the number of nodes pre-determined or dynamic?
Nodes can be dynamic; no need to gang-schedule. <https://docs.ray.io/en/master/cluster/kubernetes.html>
and the code is pickled, I assume?
yep
Ben Han let us sync up if you want to work on this, I can help you. Are you comfortable with golang/python?
Yeah, both work for me. I think I have a simulator we can use as a target application for Ray over Flyte. I'll throw something on your calendar.
:wave: , I'm Bill, PM @ Anyscale. Happy to see progress on this integration - is kubernetes a must-have for you all?
any problems with K8s?
There shouldn't be. As I mentioned, we've got users running it in production. I think we just need to know more about the behavior you would want out of the system to be able to answer that question better.
Kevin Su, do we need to deploy the KuberayOperator and RayCluster in a separate namespace, or do they need to be deployed in the same namespace where the Flyte components are running? <https://blog.flyte.org/ray-and-flyte> Here they link to a GitHub page whose README mentions the namespace *ray-system*. Does it matter?
No matter what namespace the Ray operator is in, the RayCluster can be created in any namespace. If you run a Ray task in `flytesnack-development`, then the RayCluster will be launched in `flytesnack-development`.
Hi, can we run Ray Tune experiments in Flyte? I ask because I am getting an error while executing:
```
import typing

import ray
from ray import tune
from flytekit import Resources, task, workflow
from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig


@ray.remote
def objective(config):
    return (config["x"] * config["x"])


ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=2)],
    runtime_env={"pip": ["numpy", "pandas"]},
)


@task(task_config=ray_config, limits=Resources(mem="2000Mi", cpu="1"))
def ray_task(n: int) -> int:
    model_params = {
        "x": tune.randint(-10, 10)
    }
    tuner = tune.Tuner(
        objective,
        tune_config=tune.TuneConfig(
            num_samples=10,
            max_concurrent_trials=n,
        ),
        param_space=model_params,
    )
    results = tuner.fit()
    return results


@workflow
def ray_workflow(n: int) -> int:
    return ray_task(n=n)
```
Is there any other way to run hyperparameter tuning in a distributed manner, like Ray Tune?
This should be possible - it seems like something is wrong in pickling. Can you file this as a bug? Also not sure why this is failing - the error is non-descriptive. If you drop the distributed part (drop the ray config), does it work? Or does it work locally?
The error mentioned above is what I got when I ran locally using `python example.py`. When I executed using the pyflyte run command I got this error. After removing the distributed config I am also getting the same error.
Yes, Ray Tune should work; you could test it locally first. Did you use Ray 2.0? I got the same error, but fixed it by upgrading Ray. BTW, the type of `result` isn't int, it's `ResultGrid`.
What is the method to specify the result type as `ResultGrid`?
```
@workflow
def ray_workflow(n: int) -> ResultGrid:
    return ray_task(n=n)
```
Is this the way?
Yes, flytekit will serialize it to pickle by default, but you could register a new type transformer to serialize it to protobuf. <https://docs.flyte.org/projects/cookbook/en/latest/auto/core/extend_flyte/custom_types.html>
I have upgraded the Ray version; it is 2.1.0 now. But I am still getting this error when I specify the result type as ResultGrid. Is it compulsory to register a new type transformer? Is the error caused by that? Now the Ray cluster is getting initiated, but after that I get an error. For demo purposes I just returned the length of the result grid.
`AttributeError: 'NoneType' object has no attribute 'encode'`
`ray.tune.error.TuneError: The Ray Tune run failed. Please inspect the previous error messages for a cause. After fixing the issue, you can restart the run from scratch or continue this run.`
```
import ray
from ray import tune, air
from ray.air import Result
from ray.tune import ResultGrid
from flytekit import Resources, task, workflow
from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig


@ray.remote
def objective(config):
    return (config["x"] + 2)


ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=2)],
    runtime_env={"pip": ["numpy", "pandas"]},
)


@task(task_config=ray_config, limits=Resources(mem="2000Mi", cpu="1"))
def ray_task() -> int:
    model_params = {
        "x": tune.randint(-10, 10)
    }
    tuner = tune.Tuner(
        objective,
        tune_config=tune.TuneConfig(
            num_samples=10,
            max_concurrent_trials=2,
        ),
        param_space=model_params,
    )
    result_grid = tuner.fit()
    return len(result_grid)


@workflow
def ray_workflow() -> int:
    return ray_task()
```
So your ray run itself is failing
Hey Priya. IIUC, the function (`objective`) in `tune.Tuner` should be a regular function instead of a Ray remote function.
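In other words, something closer to this sketch should be passed to `tune.Tuner` (assuming Ray 2.x; the `score` metric name is made up):
```
from ray import tune

def objective(config):
    # A plain Python function (no @ray.remote); Tune distributes the trials
    # itself and expects a dict of metrics back.
    return {"score": config["x"] + 2}

tuner = tune.Tuner(
    objective,
    tune_config=tune.TuneConfig(num_samples=10, max_concurrent_trials=2),
    param_space={"x": tune.randint(-10, 10)},
)
result_grid = tuner.fit()
print(result_grid.get_best_result(metric="score", mode="max"))
```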
Hi, I earlier tried using ray.init() for my previous Flyte task. Now how should I override the Ray engine to use the default? Even after shutting down the Ray instance, I can see Ray gets initialized automatically?
Interesting - are you saying that if you have 2 tasks, one Ray and one not, then Ray gets initialized even for the second one? Do you have ray.init at the module level?
Yes.. My script doesn't even have `import ray`. Not sure what I am doing wrong here?
Can you share an example for us to help
The moment the libraries are installed, Ray is getting initialized. Because of this I am getting: `UserWarning: The size of /dev/shm is too small (67108864 bytes). The required size at least half of RAM (33303341056 bytes). Please, delete files in /dev/shm or increase size of /dev/shm with --shm-size in Docker. Also, you can set the required memory size for each Ray worker in bytes to MODIN_MEMORY environment variable.` I am actually not using Ray here.
Hi Team, we are trying to run the Flyte workflow for ML training using an XGBoost classifier, but we are getting the error below. Could you please help?
```
Placement group creation timed out. Make sure your cluster either has enough resources or use an autoscaling cluster. Current resources available: {'CPU': 1.0, 'object_store_memory': 217143276.0, 'memory': 434412750.0, 'node:10.69.53.118': 0.98}, resources requested by the placement group: [{'CPU': 1.0}, {'CPU': 1.0}]
```
Have you checked `ray status`?
Have you tried to increase task resource? ```@task(task_config=ray_cfg, request=Resources(mem="2000Mi", cpu="1"))```
hey a while back there was an RFC on ray integration that included something about support for persisting cluster resources across tasks, is that something still in progress? can someone point me to the docs?
Kevin Su so we spoke with Keshi Dai about the persistent cluster feature. to clarify, you mean that if one workflow runs two ray tasks, they both end up using the same cluster right?
yea to save the boot time right? and possibly skip serialization?
To save boot time, yes. Serialization still happens IIUC. <https://github.com/flyteorg/flyteplugins/tree/master/go/tasks/plugins/k8s/ray> is all the code, right Kevin?
yup
do you have any more info on the persistent clusters? we could potentially use it to speed up our end to end workflows by quite a bit :slightly_smiling_face: