Keshi Dai here are the meeting notes - <https://docs.google.com/document/d/1z6rAPlg8O-N6Bm3NxcHZgwaEDJMrr2PBBJh1uPqN5-w/edit#>
cc Haytham Abuelfutuh / Daniel Rammer
| Awesome! Thanks Ketan!
Keshi Dai updated the doc with the things that we discussed yesterday. left one place for you to add some examples.
| Thank you! Really appreciate it! I’ll give a read today and add examples there
Yee Ketan Umare do you mind if I also share this doc with Shopify folks? They might be also interested. cc Isaac Vidas
| oh please do. i think you should have editor (and share) perms
| Yee I think the doc needs to be reordered
why is phase 2 before phase 1?
| BTW, are you guys interested in presenting this in the next KubeRay community meeting? cc Jiaxin Shan
| when is that?
i think we should… but only if we’re ready to.
and if not the next one, do you think we could do the one after that?
i’m curious to see how these other community syncs are run too
| The next sync will be July 12 - it runs every two weeks on Tuesday nights, 6PM Western time
| we should be ready by then yeah
we’ll work together to present?
not sure if it’ll be me or kevin or ketan but one of us from our side
| Sure, I can give a bit of context on how we will use this at Spotify, but I’ll leave the design doc to you guys
| this is great
let’s present
and if possible also demo?
Ketan Umare Yee regarding the KubeRay community demo, it seems tomorrow’s agenda will be pretty packed with the KubeRay 0.3 release, according to Jiaxin Shan. Are you guys OK with moving to the one on July 26? Jiaxin Shan could you please help confirm that there’ll be a slot available for the sync on July 26?
| +1
Yeah 26th sounds perfect.
Keshi Dai Jiaxin Shan - so we’re on for the 26th right? we’re gonna start work on the presentation tomorrow.
could you tell us how long the slot is?
and the details for the meeting? is it a zoom meeting?
also is there a ray slack we should be on?
cc Eduardo Apolinario and Kevin Su
| The meeting is 30 mins long. If you can keep the demo to 10-15 mins, that would be great. I will send the Google meeting link later
| Keshi Dai how much of the presentation do you want in on?
so the whole meeting is 30 mins? so 15 mins for presentation and 15 for questions/discussion?
or do we have to fit everything into 15 mins? and the other 15 is for someone else?
| I think 15mins presentation and 5mins QA would be great
| beautiful, thank you
| Yee I don’t think I have much to present here. I’ll just cheer for you guys, and I can chime in and say a few words on how we will use it at Spotify
| Jiaxin Shan are any of the prior talks on youtube? just want to see an example.
| Yee I don’t think we’ve recorded the meetings or uploaded them to YouTube. It’s pretty flexible - you can share docs etc., no strict rules on style or materials
Great job Yee & Kevin Su, and thank you Keshi Dai
I actually think we can release the ray integration in alpha as a patch release, as it needs no platform changes
| This is really awesome work! Thank you all!
| We finally did it - also wouldn't be possible without Jiaxin Shan
| nice :+1:
Keshi Dai - these are the Ray PRs we’re working on.
<https://github.com/flyteorg/flyteidl/pull/308>
<https://github.com/flyteorg/flytekit/pull/1093>
<https://github.com/flyteorg/flyteplugins/pull/279>
responding to your comment on the first one, but mind taking a quick look at the other two as well?
Kevin Su’s shooting to merge these tomorrow, assuming they’re okay. After that, I think we’re actually going to do a patch Flyte release (like the whole platform, since propeller and admin will need to pick up the new IDL)
do you think you’ll be able to help test/vet after that?
kevin’s still working on getting the dashboard hooked up, but you had mentioned something about metrics as well that you were interested in. will think about ways to enable those as well (though can’t promise anything just yet)
| Thanks! Yeah, I started looking into these PRs
Hey Yee, I dropped a few comments in the PRs. They are mostly around the flexibility of the Ray cluster configuration. Essentially we will need more settings, such as service account and image, in the Ray cluster config
> do you think you’ll be able to help test/vet after that?
yep, I’m happy to dogfood it!
| Keshi Dai can we use the same serviceaccount as the workflow service account?
| You mean the service account for flyte workflow? I think they are different
| why?
they are k8s serviceaccounts
| but they are run on different clusters
| no? we are running a K8s CRD
they are all on the same cluster and even the same namespace
| that won’t work for us though - flyte workflows are executed in their own infra, and in a flyte workflow we need to create a cluster in the Ray infra
so when you create the Ray cluster, do you assume you have the Ray CRD installed on the flyte cluster?
| that’s the current setup right - cc Kevin Su?
| we might need to sync on this - in our setup, Ray and Flyte are run on different clusters. KubeRay also has an API server, I wonder if that will make things easier :thinking_face:
| so Flyte does support running on different clusters
but my recommendation is to run just flyte in multi-cluster mode
this is so much better than using the same cluster
that way, data stuff cannot impact ML stuff
we actually ran 13 clusters at Lyft
| yes exactly! we have a dedicated flyte cluster for orchestration, some teams use a flink cluster for data, and now we have ray clusters
| ya
but a better model is to run flytepropeller on different clusters
keep the central control plane independent and just parachute flytepropeller to multiple clusters
and it can connect back.
| Gotcha!
| otherwise the one propeller will start getting overloaded (at very high scale, but it will)
and one good property of having flytepropeller manage your cluster is that it knows how the scheduler is throttling resources and works correctly
| > that’s the current setup right - cc
yes, we have to install the Ray CRD on the flyte cluster
Discussion: In Spotify’s setup, Ray and Flyte run on different clusters. KubeRay also has an API server, and they want to use Propeller to send an HTTP request to the Ray API server to create a Ray job. Therefore, we may need to add a Ray webAPI plugin in propeller.
Starting a new thread to discuss this. Is there a better way to handle this?
cc Yee Ketan Umare Keshi Dai
| yeah Keshi Dai i know you brought this up in the past, but is there any chance you can add flyte to the ray cluster?
| hmm, does that mean we need to install Propeller on our ray cluster? do you mind elaborating more on this? thank you!
| yeah… though it’s not possible to scan the workflow right now, is it? Katrina Rogan do you know?
like if a workflow has a certain type of task, then it should go to a certain cluster, was that in the feature anand worked on?
| are you asking how we assign specific executions to specific flytepropeller clusters?
| yeah
| we don't have anything that's workflow-aware, right now it's just a static mapping: <https://docs.flyte.org/en/latest/deployment/multicluster.html#user-and-control-plane-deployment>
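That static mapping is roughly this shape in the flyteadmin values (a hedged sketch - key names assumed from the multicluster docs linked above; endpoints and credential paths are placeholders):
```
clusters:
  clusterConfigs:
    - name: "dataplane_1"
      endpoint: "https://dataplane-1.example.com:443"  # placeholder kube-apiserver endpoint
      enabled: true
      auth:
        type: "file_path"
        tokenPath: "/var/run/credentials/dataplane_1_token"   # placeholder credential paths
        certPath: "/var/run/credentials/dataplane_1_cacert"
labelClusterMap:
  team1:                  # executions labeled "team1" route to this data plane
    - id: "dataplane_1"
      weight: 1
```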
| thanks katrina. so yeah, never mind. Keshi Dai, for the current plugin to be useful, the kuberay operator will need to be installed on the flyte cluster. is this not possible?
| yeah it’s not working with our current setup since Ray and Flyte are owned by different teams and operated on different GKE clusters.
| and there’s no way to add the ray operator to the flyte cluster?
| Can’t we load the creds for the ray cluster and submit the job resource there?
| Babis Kiosidis we can, but this causes a lot of problems; you folks did it for the FlinkOperator, if you remember, and it should work with Ray
But our recommended method is to just install flytepropeller on all k8s clusters. Think of propeller as integral to K8s
| yeah i am asking because we already work in this way with a flink task and the FlinkOperator, which is running on a different GKE cluster (more resources available for massive jobs)
would installing propeller on the ray cluster automatically forward the ray tasks towards that propeller?
Niels Bantilan Samhita Alla mind reviewing and either closing or pushing these to the next milestone?
<https://github.com/flyteorg/flyte/issues?q=is%3Aopen+is%3Aissue+author%3ASandraGH5+milestone%3A0.15.0>
| Closed all issues except one.
Niels Bantilan <https://docs.flyte.org/en/latest/getting_started_iterate.html#gettingstarted-iterate>: In this, the next page has to be "User Guide", correct? But it's "Core Concepts" now.
I'm not sure if this is to be closed. It's your call.
| that’s part of a larger issue of “next”/“previous” buttons that are out of order, since we use multiple docs sites and stitch them together with RTD sub-projects
| Thank you guys. I moved the remaining issue: <https://github.com/flyteorg/flyte/issues/969> to the next milestone
Gleb Kanterov mind reviewing and either closing or pushing these to the next milestone? <https://github.com/flyteorg/flyte/issues?q=assignee%3Akanterov+is%3Aopen+milestone%3A0.15.0>
| Sorry busy today, pinging Nelson Arapé
| Taking a look
| :pray:
| Haytham Abuelfutuh I pushed to 0.16 to unblock you, but i will talk to Gleb Kanterov to move it to a more appropriate milestone
| :bow:
is this ready/done? <https://github.com/flyteorg/flyte/issues/997>
| no
we only publish snapshots
branch nodes and sub-workflows are ready
Ketan Umare mind reviewing these?
<https://github.com/flyteorg/flyte/issues/654>
<https://github.com/flyteorg/flyte/issues/590>
Prafulla Mahindrakar mind reviewing these too:
<https://github.com/flyteorg/flyte/issues/392>
<@U024KQ3GL59> mind reviewing this?
<https://github.com/flyteorg/flyte/issues/1101>
| Sure Haytham Abuelfutuh
| Prafulla Mahindrakar we should support creating a new launchplan, I don’t think we should close this
| It’s still in progress. <https://github.com/flyteorg/flyteadmin/pull/219> is the most recent development.
| np... I moved it to the next milestone... I was just trying to close the current milestone... thank you!
PR for the new Graph UX
<https://github.com/flyteorg/flyteconsole/pull/176>
| Gleb Kanterov / Nelson Arapé who had a few questions on this. Also Sören Brunk / Anand Swaminathan / Anmol Khurana / varsha Parthasarathy / Miguel Toledo
| Wow
so it should be possible to pickle and run tasks directly from notebook onto a flyte-server
| that's epic
I'm imagining map tasks from within notebook would be incredible
| yup
exactly
also distributed training
and others
so i think we should just rewrite this resolver <https://github.com/flyteorg/flytekit/blob/master/flytekit/extras/cloud_pickle_resolver.py>
or use this resolver in fast-execute
and update pyflyte-fast-execute to use cloudpickle as an option
and then it should just work IMO?
Also Greg Gydush have you seen the new map-tasks
support for multiple-lists etc and partials?
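(The gist of that resolver idea, as a sketch - helper names here are illustrative, not flytekit’s API: cloudpickle ships the live task object from the notebook, so the container can restore it without the source file existing there.)
```
# Illustrative sketch only - pack/unpack are made-up names, not flytekit's API.
import base64

import cloudpickle


def pack(task_obj) -> str:
    # Serialize the live task object (closures and all) from the notebook session.
    return base64.b64encode(cloudpickle.dumps(task_obj)).decode("ascii")


def unpack(payload: str):
    # Restore it inside the execution container - no source file needed.
    return cloudpickle.loads(base64.b64decode(payload))
```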
| I haven't yet
| this is in beta, some small bugs being fixed, but keep an eye on it
<https://github.com/flyteorg/flytekit/pull/1556>
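(Roughly what the partials support in that PR looks like in user code - a sketch; task names and values are illustrative:)
```
# Sketch: map over one list while fixing another input with functools.partial.
from functools import partial
from typing import List

from flytekit import map_task, task, workflow


@task
def scale(x: int, factor: int) -> int:
    return x * factor


@workflow
def wf(xs: List[int]) -> List[int]:
    # factor is bound via partial; map_task fans out over xs
    return map_task(partial(scale, factor=2))(x=xs)
```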
Got it, first single node multi GPU :+1: I can implement this on Friday or Saturday and then ping you for review. Or do you need it before that?
| Hmm that should be ok. What I want to do is train alpaca on Flyte and have that as a demo
Actually if you open a PR directly on flytekit I can hack too
Else I can copy paste hack and open PR
Let me do it for single machine and you can make it work for distributed
| Can you give me permissions to open a PR in flytekit please?
Then I’ll push there
Or feel free to just copy, whichever is easier for you :slightly_smiling_face:
| Ohh you don’t have perms
Can I give you
I can send it
Wait you should get it in 2 minutes
| Ok, branch is ready to push :slightly_smiling_face:
thx
| Ok you should have it
| <https://github.com/flyteorg/flytekit/pull/1583>
Cool thanks :slightly_smiling_face:
I closed the other PR from the fork
Feel free to also hack/commit on this branch
| Perfect
| This is going to be awesome
We currently beat ignite into launching the local process group instead of torchrun.
Looking forward to throwing that logic out
Ketan Umare I pushed a few commits to the wip branch. Cleanup + docstrings.
Also working on making it work in a distributed way now.
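(For context, the ignite logic being thrown out is roughly what torch’s elastic launcher already provides - a sketch using `torch.distributed.launcher.api`, the machinery torchrun wraps; values are illustrative:)
```
# Sketch: launch a local elastic process group programmatically instead of
# wiring ranks by hand. Assumes torch is installed; values are illustrative.
from torch.distributed.launcher.api import LaunchConfig, elastic_launch


def train_fn():
    import torch.distributed as dist

    dist.init_process_group(backend="gloo")
    print(f"rank={dist.get_rank()} world_size={dist.get_world_size()}")
    dist.destroy_process_group()


if __name__ == "__main__":
    config = LaunchConfig(
        min_nodes=1,
        max_nodes=1,
        nproc_per_node=2,
        rdzv_backend="c10d",
        rdzv_endpoint="localhost:0",  # port 0: let the launcher pick a free port
    )
    elastic_launch(config, train_fn)()
```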
| Yup, I pushed some
Commits too
If you had seen
This is looking great
| Saw them :+1:
| I will try to get alpaca working on it too
Then we can test
On a side note I also got tasks working from a jupyter notebook -
That way you can train large models directly from an interactive environment
| You mean write task in notebook and then just run task from there?
| Yup
No need to have it in a python script
Finally you will have to copy
| I’m not much of a notebook user ^^ But I guess for many data scientists this is a killer feature
| Ya that’s my hope
Me too
| I have a question about how to select the plugin for the task type. I have this:
```class MultiNodePytorchElasticFunctionTask(PythonFunctionTask[Elastic]):
    _ELASTIC_TASK_TYPE = "torch-elastic"

    def __init__(self, task_config: Elastic, task_function: Callable, **kwargs):
        super(MultiNodePytorchElasticFunctionTask, self).__init__(
            task_type=self._ELASTIC_TASK_TYPE,
            task_config=task_config,
            task_function=task_function,
            **kwargs,
        )

    def get_custom(...): ...```
I also added this to helm values:
```enabled_plugins:
  tasks:
    task-plugins:
      enabled-plugins:
        - ...
        - pytorch
      default-for-task-types:
        ...
        pytorch: pytorch
        torch-elastic: pytorch```
Propeller says:
```{"json":{"exec_id":"f1c678faf0fd74fad828","node":"n0","ns":"flytesnacks-development","res_ver":"23058","routine":"worker-2","tasktype":"torch-elastic","wf":"flytesnacks:development:wf.wf"},"level":"warning","msg":"No plugin found for Handler-type [torch-elastic], defaulting to [container]","ts":"2023-04-08T22:26:30Z"}```
Do I need to configure this somewhere else as well?
The existing pytorch plugin in flyteplugins just needs an additional if else whether to configure an <https://github.com/kubeflow/training-operator/blob/b2ee1cb380b94004798b44ca32a14de3bddc675f/pkg/apis/kubeflow.org/v1/pytorch_types.go#L90|ElasticPolicy>.
| AFK
i think your config looks right
Fabio Grätz when you get a chance check the first few log lines if you start flytepropeller
i think this config looks ok
Fabio Grätz quick question, we should not need `standalone` / single node pytorch operator right?
we should automatically adapt?
what if we add a check in the TorchElastic constructor and change the task-type:
`if num replicas is 1` then the plugin type is `torch-elastic-standalone`, else it is `torch-elastic`, and the backend config for `torch-elastic` is set to use `pytorch-operator`?
| I was thinking exactly the same.
I feel like this should go into the existing pytorch plugin, not new ones, since for the kubeflow training operator, vanilla torch distributed training and torch elastic training only differ by the elastic config in the pytorchjob manifest. Same k8s kind though.
This stays the same for backwards compatibility of course:
`pip install flytekitplugins-kfpytorch`
```from flytekitplugins.kfpytorch import Pytorch

@task(
    task_config=Pytorch(...)
)```
But people could do `pip install flytekitplugins-kfpytorch[elastic]` (for the torch dependency) and then:
```from flytekitplugins.kfpytorch import ElasticPytorch

@task(
    task_config=ElasticPytorch(nnodes=1)  # single pod, no operator
)

@task(
    task_config=ElasticPytorch(nnodes=2)  # pytorch operator
)```
And in flyteplugins all the pytorch code can be reused as well, just an if for whether we need to set the elastic config in the pytorchjob.
Already works:
```class PytorchElasticFunctionTask(PythonFunctionTask[Elastic]):
    _ELASTIC_TASK_TYPE = "pytorch"
    _ELASTIC_TASK_TYPE_STANDALONE = "container"

    def __init__(self, task_config: Elastic, task_function: Callable, **kwargs):
        task_type = self._ELASTIC_TASK_TYPE_STANDALONE if task_config.nnodes == 1 else self._ELASTIC_TASK_TYPE
        super(PytorchElasticFunctionTask, self).__init__(
            task_config=task_config,
            task_type=task_type,
            ...

    def get_custom(self, settings: SerializationSettings) -> Optional[Dict[str, Any]]:
        if self.task_config.nnodes == 1:
            """
            Torch elastic distributed training is executed in a normal k8s pod so that this
            works without the kubeflow train operator.
            """
            return super().get_custom(settings)
        else:
            from flytekitplugins.kfpytorch.models import PyTorchJob
            job = PyTorchJob(```
```Every 2.0s: kubectl get pods -n flytesnacks-development    Fabios-MacBook-Pro.local: Sun Apr  9 23:04:03 2023

NAME                                 READY   STATUS    RESTARTS   AGE
f91014ed8990b4c79b32-n0-0-master-0   1/1     Running   0          23s
f91014ed8990b4c79b32-n0-0-worker-0   1/1     Running   0          22s
f91014ed8990b4c79b32-n0-0-worker-1   1/1     Running   0          16s
f7e922a78842044aba46-n0-0            1/1     Running   0          7s```
Only diff between the two is `nnodes` being 1 or not.
Can you pls give me perms to make a PR in idl and plugins next week? Or shall I do it from a fork there?
I will work on the changes in plugins tomorrow.
Free day in Germany
:slightly_smiling_face:
| I can give you perms
i like the idea
idl and plugins permissions added
also we can simply add the same plugin for different config types
Fabio Grätz also, thought some more about
```pip install flytekitplugins-kfpytorch[elastic]```
Maybe we can simply add an import gate: if the module is not found, raise an error that torch should be installed
Also Fabio Grätz I have this repo created - <https://github.com/unionai-oss/stanford_alpaca/pull/1>
check it out
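(That import gate could look something like this - a sketch, assuming torch arrives via the `[elastic]` extra:)
```
# Sketch of an import gate for the optional torch dependency.
try:
    import torch  # noqa: F401
except ImportError as e:
    raise ImportError(
        "Elastic task support requires torch. "
        "Install it with: pip install flytekitplugins-kfpytorch[elastic]"
    ) from e
```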
| I saw you simplified the models, didn’t know this was possible, nice :+1:
Update from my side:
I opened a draft <https://github.com/flyteorg/flyteidl/pull/394|PR in idl> and in <https://github.com/flyteorg/flyteplugins/pull/343|plugins>. Built a propeller image, creating a distributed pytorchjob with elastic config works.
What is not working reliably yet is the rendezvous when initiating the process group.
We definitely need something similar to <https://github.com/flyteorg/flytekit/pull/1583/commits/333e6008c2b3bac2fc77a379fd2220288a2a4519|what I added here>:
``` rdzv_endpoint=os.environ.get("PET_RDZV_ENDPOINT", f"localhost:0"),```
Here, `localhost:0` means torchrun picks a free port (see <https://github.com/pytorch/pytorch/blob/537c346117967da690c9fe719e27d08ce9d43424/torch/distributed/run.py#L100|docs>).
I’m currently working on making <https://github.com/kubeflow/training-operator/tree/master/examples/pytorch/elastic/echo|this minimal elastic example> from the kubeflow training operator repo work:
```import os
import logging

from flytekit import task, workflow
from flytekitplugins.kfpytorch import PyTorch, Elastic

logging.basicConfig(level=logging.INFO)  # To see torchrun trying to establish the rendezvous

@task(
    task_config=Elastic(
        nnodes=2,
        nproc_per_node=2,
        start_method="fork",
    )
    # task_config=PyTorch(num_workers=2)
)
def train() -> str:
    import io
    import os
    import pprint
    import sys
    import time

    import torch.distributed as dist

    env_dict = {
        k: os.environ[k]
        for k in (
            "LOCAL_RANK",
            "RANK",
            "GROUP_RANK",
            "WORLD_SIZE",
            "MASTER_ADDR",
            "MASTER_PORT",
            "TORCHELASTIC_RESTART_COUNT",
            "TORCHELASTIC_MAX_RESTARTS",
        )
    }

    with io.StringIO() as buff:
        print("======================================================", file=buff)
        print(
            f"Environment variables set by the agent on PID {os.getpid()}:", file=buff
        )
        pprint.pprint(env_dict, stream=buff)
        print("======================================================", file=buff)
        print(buff.getvalue())
    sys.stdout.flush()

    dist.init_process_group(backend="gloo")
    dist.barrier()
    rank = dist.get_rank()
    print(
        (
            f"On PID {os.getpid()}, after init process group, "
            f"rank={dist.get_rank()}, world_size = {dist.get_world_size()}\n"
        )
    )
    dist.destroy_process_group()
    return f"foo-{rank}"

@workflow
def wf():
    train()

if __name__ == "__main__":
    print(f"Parent {os.getpid()}")
    print(wf())```
The rendezvous sometimes fails, sometimes works; currently debugging why. Just as fyi where I’m at…
One other thing about which I’m interested in your opinion:
`torchrun` allows the user to set `--nnodes`, which could e.g. be `2` but also `"1:2"`, which means min 1 max 2. Currently this is what our new `task_config=Elastic()` exposes as well.
The kubeflow PytorchJob allows setting `minReplicas`, `maxReplicas` (which by default are both None), and `replicas` (see <https://github.com/kubeflow/training-operator/blob/master/examples/pytorch/elastic/echo/echo.yaml|here>). In theory you could say min 2, max 4, replicas 3 (without going into how much sense this makes).
If a user specifies `2:3` we currently set min to 2 and max and replicas to 3.
To summarize: Should we expose `nnodes` like torchrun or `min_replicas`, `max_replicas`, and `replicas` like the pytorchjob to the user?
| ohh is that a question?
i like min and max
isn’t it the same? but more explicit?
| Currently we make the assumption that when the user specifies `3:5`, we set `maxReplicas` but also `Replicas` to 5. In theory this doesn’t have to be the case in the pytorchjob manifest.
I’ll change it to the more explicit version :+1:
| Aah got it
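(A sketch of the mapping being agreed on here - illustrative, not the plugin’s actual code:)
```
# Illustrative only: map a torchrun-style nnodes spec onto explicit
# min/max replica counts, as discussed above.
from typing import Tuple, Union


def parse_nnodes(nnodes: Union[int, str]) -> Tuple[int, int]:
    if isinstance(nnodes, int):
        return nnodes, nnodes  # fixed size: min == max
    lo, _, hi = str(nnodes).partition(":")
    return int(lo), int(hi or lo)


assert parse_nnodes(2) == (2, 2)
assert parse_nnodes("1:2") == (1, 2)
assert parse_nnodes("3:5") == (3, 5)
```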
• <https://github.com/flyteorg/flyte/issues/3614>
• <https://github.com/flyteorg/flytekit/pull/1603>
• <https://github.com/flyteorg/flyteidl/pull/394>
• <https://github.com/flyteorg/flyteplugins/pull/343>
• <https://github.com/flyteorg/flytesnacks/pull/987>
| we have to merge starting with flyteidl
flytekit will be the last to merge
this allows us to change things if needed
cc Jeev B
Byron Hsu we are enabling torch-elastic in flytekit now
| thanks this is amazing! will inform my team
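(Usage mirrors the example earlier in the thread - a sketch; the `nnodes`/`nproc_per_node` values are illustrative:)
```
from flytekit import task
from flytekitplugins.kfpytorch import Elastic


@task(
    task_config=Elastic(
        nnodes=2,          # >1 runs via the kubeflow pytorch operator
        nproc_per_node=2,  # worker processes per node
    )
)
def train() -> None:
    ...
```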
Also, Fabio Grätz do you folks use - <https://github.com/libffcv/ffcv>?
created this - <https://github.com/flyteorg/flyte/issues/3615>
| Mh no, not using it, but it looks interesting. <https://arxiv.org/abs/2209.13705|This> paper compares ffcv to other libraries, including squirrel, which my previous company built. The authors didn’t use many features of squirrel though, otherwise it would be faster.
When I look at the code snippets on ffcv’s website+github, I’d say this should all live in user code though; it just needs an image with the dependencies installed. What role should Flyte play, in your opinion?
> For example, we could not run FFCV with a dataset hosted in an S3 bucket to perform our remote experiments.
(From the comparison paper)
This is a downside for a data loading library tbh. Squirrel from my previous company uses fsspec and was designed for remote loading.
| Interesting that it would only work with local files
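(For contrast, fsspec-style remote loading - the property the comparison paper says ffcv lacked - looks roughly like this; bucket and key are placeholders, and an s3 filesystem backend such as s3fs is assumed:)
```
# Sketch: stream bytes straight from object storage via fsspec.
import fsspec

with fsspec.open("s3://my-bucket/train/shard-0000.bin", "rb") as f:  # placeholder path
    header = f.read(16)
```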
Thanks for finishing the PR and merging :rocket:
| Flyteplugins too
| Yes, saw it :slightly_smiling_face:
I will amend the flytesnacks docs PR with the min_replicas change and ping for review there as well.
| amazing work!
| <https://github.com/flyteorg/flytesnacks/pull/987>
Can you pls take a look?
| The changes look good! But there’s an unrelated issue with the `sphinxcontrib-yt` package in our docs :disappointed:
need to take a look
Danielle Curammeng -
Docs: <https://docs.flyte.org/projects/flytekit/en/latest/design/control_plane.html#design-control-plane|FlyteRemote>
Docs: <https://docs.flyte.org/projects/cookbook/en/latest/auto/core/flyte_basics/imperative_wf_style.html#sphx-glr-auto-core-flyte-basics-imperative-wf-style-py|ImperativeWorkflows>
Example of JupyterLab workflow building and registering
| Ketan Umare hm I just realized. Was this last line supposed to be a link? Maybe there’s an example there solving the issue I just shared
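(A sketch of the FlyteRemote flow those docs cover - registering and executing from an interactive session; endpoint, project, domain, and version are placeholders:)
```
# Sketch: drive Flyte from a notebook with FlyteRemote.
# Endpoint/project/domain/version below are placeholders.
from flytekit import task
from flytekit.configuration import Config
from flytekit.remote import FlyteRemote


@task
def hello(name: str) -> str:
    return f"hello {name}"


remote = FlyteRemote(
    Config.for_endpoint("flyte.example.com"),
    default_project="flytesnacks",
    default_domain="development",
)
execution = remote.execute(hello, inputs={"name": "world"}, version="v1", wait=True)
print(execution.outputs)
```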