# Unusual predicted structures from pretrained OpenFold on Pascal GPU
This is most likely some kind of local configuration error, but I haven’t been able to pin down the cause. If anyone has encountered this behavior before or has an idea of what might be wrong based on these output structures, any hints would be greatly appreciated!
**Expected behavior:**
`run_pretrained_openfold.py` outputs predicted structures comparable to AlphaFold or OpenFold Colab output. I expected a structure similar to this unrelaxed prediction from OpenFold Colab `model_1` with `finetuning_1.pt`:

[image]
**Actual behavior:**
My `run_pretrained_openfold.py` predicted structures are not similar to AlphaFold or OpenFold Colab output.

Predictions from `model_1` with `finetuning_1.pt` (unrelaxed in tan, relaxed in blue):

[image]

Predictions from `model_1` with `params_model_1.npz`:

[image]

Predictions from `model_1` with `params_model_1.npz` using alignments from ColabFold MMseqs2 (ColabFold had predicted a reasonable expected structure):

[image]
**Context:**
4 x NVIDIA GTX 1080 Ti GPUs, using CUDA 11.3 (I can provide other system details if relevant).
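For reference, the GPU generation can be confirmed with PyTorch; a 1080 Ti should report compute capability 6.1 (Pascal). This is just an illustrative check, not part of OpenFold:

```python
import torch

# Print each visible GPU with its compute capability.
# Pascal cards such as the GTX 1080 Ti report (6, 1).
for i in range(torch.cuda.device_count()):
    name = torch.cuda.get_device_name(i)
    major, minor = torch.cuda.get_device_capability(i)
    print(f"cuda:{i}: {name} (compute capability {major}.{minor})")
```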
`input/short.fasta`:

```
>query
MAAHKGAEHHHKAAEHHEQAAKHHHAAAEHHEKGEHEQAAHHADTAYAHHKHAEEHAAQAAKHDAEHHAPKPH
```
**Run command:**

```bash
python3 run_pretrained_openfold.py \
    input \
    data/pdb_mmcif/mmcif_files/ \
    --output_dir output \
    --cpus 16 \
    --preset reduced_dbs \
    --uniref90_database_path data/uniref90/uniref90.fasta \
    --mgnify_database_path data/mgnify/mgy_clusters_2018_12.fa \
    --pdb70_database_path data/pdb70/pdb70 \
    --uniclust30_database_path data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
    --bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --model_device "cuda:0" \
    --jackhmmer_binary_path $venv_bin_dir/jackhmmer \
    --hhblits_binary_path $venv_bin_dir/hhblits \
    --hhsearch_binary_path $venv_bin_dir/hhsearch \
    --kalign_binary_path $venv_bin_dir/kalign \
    --config_preset "model_1" \
    --openfold_checkpoint_path openfold/resources/openfold_params/finetuning_1.pt
```
**Other configurations I tried, which produced similarly strange outputs:**

- Removing `--openfold_checkpoint_path` to just use the AlphaFold weights
- Using `--config_preset "model_1_ptm"` with `finetuning_ptm_2.pt`
- Using `--use_precomputed_alignments` with alignment results from a previous OpenFold output
- Using `--use_precomputed_alignments` with `.a3m` results from ColabFold
- Using `full_dbs` instead of `reduced_dbs`
---
Ah, my bad, I never added it to the config. You’ll have to disable `use_memory_efficient_kernel` manually in `openfold/model/evoformer.py`. There should only be one occurrence of it there; change the setting from `not use_lma` to `False`.
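For reference, the change looks roughly like the sketch below. The enclosing call and its other arguments are assumptions and may differ between OpenFold versions; the only intended edit is the value passed for `use_memory_efficient_kernel`:

```python
# openfold/model/evoformer.py (sketch; the enclosing call and the other
# argument names are assumptions and may not match your checkout)
m = self.msa_att_row(
    m,
    z=z,
    mask=msa_mask,
    chunk_size=chunk_size,
    # was: use_memory_efficient_kernel=not use_lma
    use_memory_efficient_kernel=False,  # fall back to standard attention
    use_lma=use_lma,
)
```

With `use_memory_efficient_kernel=False`, the Evoformer uses the stock attention path instead of the custom kernel, which appears to be what resolves the bad structures reported here on Pascal cards.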
I’ve confirmed that this was resolved with the fix for https://github.com/aqlaboratory/openfold/issues/172! 👏